The Nest thermostat has been gaining a lot of popularity recently, mostly due to its sleek design and enhanced learning capabilities, not to mention that it can be controlled via a smartphone. Now a Nest app is bringing compatibility to Google Glass, allowing you to control your Nest using voice commands.
The Nest will be able to hear a number of commands, but it will only provide three main functions, which are setting the device to away mode, returning the device from away mode, and changing the temperature. You can say things like “set temperature to…” or “leaving the office now” to make sure the Nest wakes up from away mode.
The Nest app is available for Google Glass right now, but only to a select number of Nest users. However, once it goes live for all users, all you’ll need to do is log in with your Nest credentials and you’ll be off to the races. The source code for the app is actually available on GitHub, so if you want to dive in right away and you’re comfortable navigating your way around code, you can play around with it for a bit.
We’ve seen a lot of apps make their way to Google Glass recently, with a slew of them being released during Google I/O, including Facebook, Twitter, Evernote, and CNN. More are sure to come over the summer, and we should see a heap of apps already available before Google Glass hits the mainstream next year.
If you haven’t heard of Sherpa, you’re most likely not alone. It’s essentially a new Android app that looks to dethrone Google Now and Apple’s Siri. Sherpa plans to launch on iPhone soon, as well as make its way to Google Glass to take down Google’s own voice command software on the new spectacles.
The company unveiled plans to bring its software to Google Glass, and Sherpa CEO Xabier Uribe-Etxebarria says that their voice command app is much more suited for Google Glass than Google’s own software, which is a bold statement. Uribe-Etxebarria says that voice commands on Google Glass are limited, and “it’s not taking advantage of all the features and the potential of Google Glass.”
What separates Sherpa from the rest of the pack is its ability to understand meaning and intent. The app can build its own metalanguage with rules and semantic concepts mixed in with using Google’s speech API. From that, Sherpa can do a wide variety of actions that you wouldn’t even think it could do.
The app is still in beta, but plans to roll out for Google Glass and other wearables either in late 2013 or early 2014. Sherpa is able to do a handful of neat tricks, including the ability to play music without downloading the tracks first, and automate directions for scheduled events in your calendar. The app can also do things like turn down the volume or toggle the WiFi.
And as Google Now does, Sherpa can also essentially predict what you will want to see and hear about, like score updates from your favorite sports teams or letting you know of some popular restaurants in the area if it’s close to dinner time. As for the kinds of things that the Google Glass app will do, that’s still unknown, but from what the company says, we can expect a lot more features out of Sherpa than with Google’s built-in offering.
With almost everything being made in China these days, it’s particularly rare when a company announces plans to manufacture something in the good ol’ US of A. However, Apple recently announced that it’s going to do just that with some of its Macs, and today it’s being reported that Google will be manufacturing its Glass eyewear in the US.
According to The Financial Times, Google will manufacture Google Glass in Silicon Valley. The search giant will be partnering with Foxconn to assemble the futuristic eyewear at a facility in Santa Clara, California. This is according to several sources familiar with the company’s plans.
If this turns out to be true, it would not only boost the reputation of Google, but it would hopefully encourage other electronics manufacturers to bring business back to the US from countries like China and Taiwan. It’s said that in the coming weeks, only a few thousand Google Glass units will roll off the assembly line.
However, the reports don’t say exactly how many employees the facility will hire, nor whether the workforce will consist of current Foxconn workers from China or all-new staff hired for the Santa Clara facility. Of course, bringing over current Foxconn workers would completely negate the whole “made in America” initiative, but we don’t think Google would be that naive.
[via The Financial Times]
News on Google’s Project Glass just keeps coming and coming. It’s no surprise that we’re extremely excited and interested in the AR tech, and now we will hopefully be learning additional details early next week. On Wednesday we shared details about the VIP treatment we will be getting for pre-ordering a pair at Google IO for around $1,500, and that treatment is about to start come Monday.
Google’s official +Project Glass Google+ account has just reached out to all the Explorer Edition buyers, confirming that we’ll be learning additional details in a private Google+ Hangout on Monday. This will include other lucky pre-order customers, as well as members from Google’s Project Glass team. Hopefully, while engaging in a live Google+ Hangout with actual developers from Google, we’ll be able to learn some neat new things about Project Glass. Obviously we will let you know the minute we hear anything worth mentioning.
Project Glass made a huge splash at Google IO, when Sergey Brin took the stage and had a pair of the AR eyewear skydiving right into the event center in San Francisco. Since then we’ve seen plenty of patents, learned a few more details, and even saw Gmail’s lead developer head to the Project Glass crew. Stay tuned for additional details and hit the timeline below for further coverage.
Recon Instruments will be showing their patented, Android-powered HUD (Heads Up Display) in the Android booth at Mobile World Congress. Last month at CES, Recon unveiled their HUD SDK for Android, and this time around we will get to see the MOD Live (Recon's name for this unit) in Barcelona. They're currently working with partners like Polar and Contour to bring third-party apps to the HUD, but for now folks can use Recon’s free HQ Mobile app to access playlists on their phone, view and share run stats, see incoming calls, display text messages, and reply via the wearable wireless Bluetooth low energy remote.
Currently available for snowsport enthusiasts, Recon plans to branch out to other outdoor sportsmen and partner with more manufacturers. Dan Eisenhardt, CEO of Recon Instruments says:
We are happy to be at the Mobile World Congress with Google at the Android stand. It is a great opportunity to show the diversity and customizable nature of our Android-powered MOD Live. Our HUD technology is currently available to snowsports enthusiasts and we will be bringing an adapted HUD solution into a number of different industries in the near future partnering with leading goggle, helmet and sunglasses brands to provide the optimum choice of fit, function and fashion to the public.
Recon Instruments' MOD Live currently works with "Recon Ready" goggles from Uvex, Briko, Zeal Optics, Alpina, Scott, and Smith, and is available for $399.99 (€360) at major retailers around the globe.
Now, about those Google Glasses. ...
More info: Recon Instruments
The University of Washington and Microsoft Research have released information on a project they’ve been working on for some time now, one that should, if completed, allow those with diabetes to monitor their glucose levels through special contact lenses. After reporting weeks and weeks of tech news without such a thing, it’s nice to write about a medical breakthrough that comes in the form of gadget advancements in such an elegant vehicle as a contact lens. If such a project can succeed, there’s no doubt we’re in the future – now we just need a pair that’ll allow me to see when a can of caffeine will have the best effect.
There’s a promotional video for this project that you’ll be able to see below, and the folks at Gizmag had a talk with Desney Tan, Senior Researcher at Microsoft Research Connections, to see what they’re all about. This isn’t the first set of experimental contact lenses designed to make life simpler for those with diabetes, but it’s certainly the most advanced. As Tan notes:
“There are now various groups working on non-invasive measurement of tear glucose. Professor Zhang’s lab has been largely using nanostructured optical probes embedded in hydrophilic hydrogel lenses, and they’ve had some successes recently. This required a whole new engineering process, since traditional integrated circuit processes would not work. …
[We've] only begun to scratch the surface of the opportunities that exist with this type of platform. The most important challenge is really in the deep exploration of all the things not yet imagined with this platform, and new platforms enabled by this new-found capability to create other technology of this form.” – Tan
This project is creating what they’ve tentatively named Functional Contact Lenses, and within them is an enzyme which interacts with tear fluid. As the enzyme reacts, specific measurements are made as changes in current occur, with the monitoring done by bio-compatible electrodes on the lens. As this project creeps ever closer to completion, Tan notes that it’s certainly possible that “as soon as everything is ready” the first models will report information wirelessly to a device inside its range, a device which “could be an augmented smart phone.” We’ll see about that!
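To give a feel for that current-to-measurement step, here’s a toy sketch of mapping a sensor current reading to an estimated glucose level. The linear calibration constants are made-up placeholders for illustration, not figures from the actual research:

```python
# Toy sketch: estimate tear glucose from the current produced by the
# enzyme reaction at the lens electrodes. The linear calibration below
# is a hypothetical placeholder, NOT from the UW/Microsoft project.

SLOPE_MMOL_PER_UA = 0.5   # assumed mmol/L per microamp of sensor current
INTERCEPT_MMOL = 0.1      # assumed baseline offset

def glucose_from_current(current_ua):
    """Map a sensor current reading (in µA) to an estimated glucose level (mmol/L)."""
    return SLOPE_MMOL_PER_UA * current_ua + INTERCEPT_MMOL

print(glucose_from_current(10.0))  # 5.1 with these placeholder constants
```

A real device would derive its calibration from per-sensor lab measurements rather than fixed constants, but the basic idea is the same: more enzyme reaction, more current, higher reading.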
Relevant Entries on SlashGear.com
Perhaps it makes us unbearably geeky, but we do have more than a soft spot for wearable computers. Unfortunately (or maybe fortunately for what little sartorial dignity we have left these days) manufacturers are proving more reluctant to put out suitable products, and that leaves the niche clear for DIYers. Martin Magnusson pointed us in the direction of his own project, taking a Vuzix Myvu Crystal eyepiece and hooking it up to a Beagleboard fanless computer and four AA batteries.
Connectivity is via Bluetooth, which tethers wirelessly to Martin’s iPhone, while input is through a Nokia Bluetooth keyboard. The Beagleboard is running Angstrom Linux and outputting graphics through S-Video; the OS is stored on an SD card.
So far Martin has experimented with a few different ways to carry his compact computer: the bandolier strap gets geek-points, but squeezing everything into a CD wallet and wearing it on a discreet shoulder strap probably makes you stand out less in public. The four AA batteries are good for up to three hours of use, though you could always slap a few more in if you don’t mind the extra weight. Geeky, yes, but still very cool if you ask us.
Vuzix’s WRAP 920AR, Kopin’s Golden-i, even Apple looks to be considering getting in on the augmented reality bandwagon. If you can’t be bothered waiting for an off-the-shelf system, then how about making your own headset that overlays digital graphics onto a real-world view? F00 at Tailor Made Toys took an Eye-Trek video headset and embedded a laptop webcam into the bridge, then hooked up an Eee PC for running AR software.
The Eye-Trek casing is big enough to take a laptop webcam assembly with little fettling, and that hooks up directly via USB to the netbook. F00 fitted a mini USB port so that he can switch out different lengths of USB cable; usually the Eee PC slips into his rucksack.
As for what it can be used for, an app called Camspace allows you to control the netbook using hand gestures, and F00 plans to add GPS functionality showing on-screen coordinates in the top corner of the display. In fact, the only problem is that you’ll look vaguely ridiculous all the time, though you could always add some AR functionality that overlays happy expressions over onlookers’ incredulous faces.
[via Hack A Day]
Don’t let anybody tell you tech blogging is all glamour; sometimes in the name of a great story – and showcasing a fantastic gadget – you end up looking pretty darn ridiculous. Wearable computing specialists Kopin were walking the MWC 2010 show floor giving demonstrations of their Golden-i head-mounted PC, which promises a 15-inch virtual display that can be voice-controlled while leaving your hands free. Check out our first-impressions and a demo video after the cut.
The Golden-i is the result of a collaboration between several companies: Kopin came up with the initial idea and supplied the SVGA microdisplay, while Motorola put the whole thing together. It’s based on a TI OMAP3 chipset, while Hillcrest Labs and Nuance had a hand in the motion-control and voice-recognition, respectively. The OS is embedded Windows CE with a custom UI on top, and the battery is good for up to 8hrs use (or, in other words, a full industrial work shift). Connectivity includes Bluetooth 2.1, mini-USB and a microSD card slot, and the body of the PC is squeezed into the rear band section that’s specially designed to be comfortable even when wearing a hard-hat.
An adjustable speaker sits by your right ear, while the eyepiece is intended to fall just beneath your line of sight; Kopin have made two versions for those with left- or right-dominant eyes. Focusing is controlled by a simple thumbwheel, and the eyepiece also houses the dual microphone array. That’s important, since much of the interface is controlled by speech; from the main menu of icons, you merely speak which option you’d like – “photo gallery”, say – and the Golden-i takes you into it. You can then select individual files, again by voice, and manipulate them with straightforward commands: zoom in, for instance.
Photo and document viewing is linked to motion-control, so moving your head around pans around the zoomed-in picture; alternatively you can lock it, again with a verbal command. Accuracy was pretty much 100-percent, albeit in a relatively short trial; Kopin say they worked on Nuance’s voice-recognition systems to boost accuracy to around 98-percent. Of course, you could always hook up a QWERTY keyboard (either by USB or Bluetooth) for more extensive text-entry.
The next-gen models, beyond this development device, will be less bulky and have more functionality. Kopin say they’ve shaved a half-inch of thickness off the rear PC section, have a 1024 x 768 display that’s the same size as the SVGA panel, and that they’re putting in faster Bluetooth 3.0 too. They’ve also been looking at potential peripherals, everything from a laser keyboard and wirelessly connected modem, to a Bluetooth pen packed with gyroscopes that allows you to virtually sign a document just by gesturing in mid-air. There’ll also be a snap-on webcam, generally looking forward but detachable – and wireless – so you can direct it into enclosed spaces or turn it back on yourself for video conferencing.
Kopin’s target market is industrial users, such as engineers, warehouse managers and medical professionals, who need data access while simultaneously keeping their hands free. An engineer repairing a server, for instance, could consult the technical manual to check up which wire was which, without having to reach for their laptop. Broad availability isn’t for another year, but the company are offering this development version which comes with the open-source software (for user-customization; you could even put Android on it if you wanted to) and a Motorola ruggedized field-PDA right now. With a data connection, you can remotely log into a server or PC and access it as if you were at your desk, all using speech-commands.
Would I actually wear a consumer version when they go on sale in – according to Kopin – a few years’ time? At a trade show, despite the odd looks, yes, I reckon I would. Attempting to navigate the MWC 2010 halls was a recipe for collisions, as 50-percent of the people at any one time tried to simultaneously rush to their next appointment and look at their cellphone. When you start trying to throw in Twitter, responding to last-minute meeting room changes or tracking your schedule, all of a sudden the idea of a floating display you can quickly glance at without sustaining a glancing blow off the nearest pillar seems reasonably tempting.
Hopefully, by then the price will have dropped a little. The development package costs around $5,500, and while Motorola – who will be distributing the Golden-i when it sees its full launch – haven’t confirmed retail pricing, they’ve said it will be in line with a ruggedized PDA, which runs around $2,500 itself. Where it gets particularly exciting is when you start wondering how technologies like Texas Instruments’ gesture-recognition, also demonstrated this week, might fit in; Kopin admitted that augmented reality systems are also in the works, with the webcam tracking your hands and reacting accordingly. Yes, I might look pretty ridiculous, but I think I could suspend my shame for that.
Kopin Golden-i wearable PC demo:
Do you know what’s difficult to demonstrate on video? Vuzix’s WRAP 920AR augmented-reality video headset, that’s what – after all, while to the outside observer you could be merely enjoying some hands-free media playback from your PMP, in actual fact the eyewear is blending together a real-world view with computer-generated imagery on a virtual 67-inch display. We caught up with Vuzix to try the 920AR headset out, and collared Michael Kwan to pose for some photos and a brief video.
The system basically pairs a set of Vuzix’s WRAP 920 eyewear with two cameras that together create a 1504 x 480 image that can be viewed in 3D. A motion-sensor tracks movement – including exact X/Y/Z position and roll/pitch/yaw – and moves the on-screen display according to your head movement.
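To give a feel for how that head tracking becomes display movement, here’s a minimal sketch that pans a virtual display opposite to head rotation so the image appears anchored in space. The pixels-per-degree constant is an assumption for illustration, not a Vuzix specification:

```python
# Toy sketch: counteract head rotation so the virtual display seems
# fixed in the world. PIXELS_PER_DEGREE is an assumed value, not a
# WRAP 920AR spec; real systems also use the X/Y/Z position data.

PIXELS_PER_DEGREE = 12.0

def display_offset(yaw_deg, pitch_deg):
    """Return the (x, y) pixel offset that counteracts a head movement."""
    return (-yaw_deg * PIXELS_PER_DEGREE, -pitch_deg * PIXELS_PER_DEGREE)

dx, dy = display_offset(5.0, -2.0)  # turn right 5 degrees, tilt up 2 degrees
print(dx, dy)
```

The key idea is simply the sign flip: if your head turns right, the rendered content shifts left by a matching amount, which is what makes the 67-inch “virtual screen” feel stationary.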
In their demonstration, Vuzix had the 920AR hooked up (via USB 2.0) to a laptop, and handed us a board of AR glyphs which the software could identify, track and overlay with different graphics. Those graphics moved around in sync with head movements. Of course, the practical applications are pretty much up to the PC and Mac developers whom Vuzix are hoping to bring on board; the obvious use is gaming, and we saw two similar systems – from the University of Singapore and Scope – last month.
However, since you could feasibly walk around with the 920AR headset in place (and not trip over) there’s also the potential for more futuristic heads-up display use from a wearable computer. The responsive performance of the Vuzix system has us wondering about when we’ll be able to connect our smartphone and have not only AR apps like Layar overlaid onto our everyday viewpoint, but floating SMS, email, IM and Twitter alerts too.
The WRAP 920AR headset will go on sale in Q2 2010, priced at $799.99. Vuzix also tell us that existing WRAP eyewear owners will be able to buy the stereo camera kit and motion-tracker to upgrade their systems, though no pricing or availability has been released.
The opportunity to jab yourself in the eye with a tiny computer display is one step closer, thanks to the ongoing work with opto-electronic contact lenses taking place at the University of Washington in Seattle. The lab there has been showing off the latest prototype, the handiwork of Dr. Babak Parviz: a semi-transparent array – including an LED – embedded into a contact lens that receives 330 microwatts of power wirelessly from a nearby RF transmitter. Parviz has been using the prototypes to display biosensor feedback about the wearer’s vital signs, but they’ll eventually serve as a heads-up display for displaying other data.
The wireless power is picked up by a loop antenna built into the lens, and future iterations of the hardware are expected to integrate the transmitter into a cellphone. There’ll also be far more LEDs involved, so that the resolution is high enough to be useful.
“Conventional contact lenses are polymers formed in specific shapes to correct faulty vision. To turn such a lens into a functional system, we integrate control circuits, communication circuits, and miniature antennas into the lens using custom-built optoelectronic components. Those components will eventually include hundreds of LEDs, which will form images in front of the eye, such as words, charts, and photographs. Much of the hardware is semitransparent so that wearers can navigate their surroundings without crashing into them or becoming disoriented” Dr Parviz, University of Washington in Seattle
Future plans see the opto-electronic lenses being used for more than just displaying data; they’ll also be able to monitor the eye’s surface chemistry, which would allow wearable computers to keep track of blood sugar levels in diabetics and other information. Parviz’s eventual goal is the contact lens becoming a platform “like the iPhone is today”, with developers creating custom apps. However, it seems that’s still a reasonably long way off.