Fitness-tracking wristbands and bracelets have mostly been used to count steps and monitor heart rate and other vital signs. Now engineers have made a 3D-printed sensor that can be worn on the ear to continuously track core body temperature for fitness and medical needs.
The “earable” also serves as a hearing aid. And it could be a platform for sensing several other vital signs, says University of California Berkeley electrical engineering and computer science professor Ali Javey.
Core body temperature is a basic indicator of health issues such as fever, insomnia, fatigue, metabolic functionality, and depression. Measuring it continuously is critical for infants, the elderly, and people with severe health conditions, says Javey. But wearable sensors available today in the form of wristbands and soft patches monitor skin temperature, which can change with the environment and is usually different from body temperature.
Body temperature can be measured using invasive oral or rectal readings. Ear thermometers measure infrared energy emitted from the eardrum and are easier to use than more invasive devices. That’s the route Javey and his colleagues took for their earable sensor, reported in the journal ACS Sensors.
For a customized fit to an individual’s ear, the team printed their sensor using flexible materials and a 3D printer. First they printed a gauzy, disc-shaped base using a stretchable polymer. This base contains tiny channels into which the researchers inject liquid metal to make electrical interconnects in lieu of metal wires. It also has grooves for an infrared sensor, microprocessors, and a Bluetooth module that transmits temperature readings to a smartphone app. They packaged the gadget in a 3D-printed case.
Because the device covers the ear, it could affect hearing, Javey says. So the engineers also embedded a bone-conduction hearing aid, made of a microphone, data-processing circuitry, a potentiometer for adjusting volume, and an actuator. The actuator sits by the temple and converts sound to vibrations, which are transmitted through the skull bone to the inner ear.
The earable accurately measured the core body temperature of volunteers wearing it in rooms heated or cooled to various temperatures, and while exercising on a stationary bicycle.
“It can be worn continuously for around 12 hours without recharging,” Javey says. “In the future, power can be further reduced by using lower power electronic components, including the Bluetooth module.”
The researchers plan to increase the device’s functionality by integrating sensors for measuring EEG, heart rate, and blood oxygen level. They also plan to test it in various environments.
Toys that teach kids to code are as hot in 2017 as Cabbage Patch Kids were in 1983, and for good reason. For today’s generation of children, learning how to program is even more important than studying a second language. Though there are many robot kits on the market that are designed for this purpose, Lego Boost is the best tech-learning tool we’ve seen for kids. Priced at a very reasonable $159, Boost provides the pieces to build five different robots, along with an entertaining app that turns learning into a game that even preliterate children can master.
How It Works
Boost comes with a whopping 847 Lego bricks, along with one motor (which also serves as a dial control in some projects), one light/IR sensor and the Move Hub, a large white-and-gray brick with two built-in motors that serves as the central processing unit for the robot. The Hub connects to your tablet via Bluetooth, to receive your programming code, and to the other two electronic components via wires.
You can build five different robots with the kit: a humanoid robot named Vernie, Frankie the Cat, the Guitar 4000 (which plays real music), a forklift called the “M.I.R. 4” and a robotic “Auto Builder” car factory. Lego said it expects most users to start with Vernie, who looks like a cross between film robots Johnny 5 and Wall-E and offers the most functionality.
To get started building and coding, kids have to download the Boost app to an iPad or Android tablet. You’ll need to have the app running and connected to the Move Hub every time you use the robot. All of the processing and programming takes place on your mobile device, and the sound effects (music, the robot talking) come out of your tablet’s speaker, not the robot itself.
The Boost App
Lego really understands how young children learn and has designed the perfect interface for them. The Boost app strikes a balance among simplicity, depth and fun. Boost is officially targeted at 7- to 12-year-olds, but the software is so intuitive and engaging that, within minutes of seeing the system, my 5-year-old was writing his own programs and begging me to extend his bedtime so he could discover more.
Neither the interface nor the block-based programming language contains any written words, so even children who can’t read can use every feature of the app. When you launch Boost, you’re first shown a cartoonish menu screen that looks like a room with all the different possible robots sitting in different spots. You just tap on the image of the robot you want to build or program, and you’re given a set of activities that begin with building the most basic parts of the project and coding them.
As you navigate through the Boost program, you need to complete the simplest levels within each robot section before you can unlock the more complicated ones. Any child who has played video games is familiar with and motivated by the concept of unlocking new features by successfully completing old ones. This level-based system turns the entire learning process into a game and also keeps kids from getting frustrated by trying advanced concepts before they’re ready.
Boost runs on modern iPads or Android devices that have at least a 1.4-GHz CPU, 1GB of RAM, Bluetooth LE, and Android 5.0 or above. (I also downloaded Boost to a smartphone, but the screen was so small that it was difficult to make out some of the diagrams.)
Unfortunately, Lego doesn’t plan to list the program in Amazon’s app store, which means you can’t easily use Boost with a Fire tablet, the top-selling tablet in the U.S. I was able to sideload Boost onto my son’s Fire 7 Kids Edition, but most users won’t have the wherewithal to do that. Lego makes its Mindstorms app available to Fire devices, so we hope the company will eventually see fit to do the same with Boost.
Unlocking New Levels and Challenges
When you load the Boost app for the first time, you need to complete a simple project that involves making a small buggy before you can build any of the five robots. This initial build is pretty fast, because it involves only basic things like putting wheels onto the car, programming it to move forward and attaching a small fan in the back.
Like the robot projects that come after it, the buggy build is broken down into three separate challenges, each of which builds on the prior one. The first challenge involves building the buggy and programming it to roll forward. Subsequent challenges involve programming the vehicle’s infrared sensor and making the fan in the back move.
After you’ve completed all three buggy challenges, the five regular robots are unlocked. Each robot has several levels within it, each of which contains challenges that you must complete. For example, Vernie’s first level has three challenges that help you build him and use his basic functions, while the second level has you add a rocket launcher to his body and program him to shoot.
If a challenge includes building or adding blocks to a robot, it gives you step-by-step instructions that show you which blocks go where, and only after you’ve gone through these steps do you get to the programming portion.
When it’s time to code, the app shows animations of a finger dragging the coding blocks from a palette on the bottom of the screen up onto the canvas, placing them next to each other and hitting a play button to run the program. This lets the user know exactly what to do at every step, but also offers the ability to experiment by modifying the programs at the end of each challenge.
In Vernie’s case, each of the first-level challenges involves building part of his body. Lego Design Director Simon Kent explained to us that, because a full build can take hours, the company wants children to be able to start programming before they’re even finished. So, in the first challenge, you build the head and torso, then program him to move his neck, while in the later ones, you add his wheels and then his arms.
Block-Based Programming Language
Like almost all child-coding apps, Boost uses a pictorial, block-based programming language that involves dragging interlocking pieces together, rather than keying in text. However, unlike some programming kits we’ve seen, which require you to read text on the blocks to find out what they do, Boost’s system is completely icon-based, making it ideal for children who can’t read (or can’t read very well) yet.
For example, instead of seeing a block that says, “Move Forward” or “Turn right 90 degrees,” you see blocks with arrows on them. All of the available blocks are located on a palette at the bottom of the screen; you drag them up onto the canvas and lock them together to write programs.
Some of the icons on the blocks are less intuitive than an arrow or a play button, but Boost shows you (with an animation) exactly which blocks you need in order to complete each challenge. It then lets you experiment with additional blocks to see what they do.
What makes the app such a great learning tool is that it really encourages and rewards discovery. In one of the first Vernie lessons, there were several blocks with icons showing the robot’s head at different angles. My son was eager to drag each one into a program to see exactly what it did (most turned the neck).
Programs can begin with either a play button, which just means “start this action,” or a condition, such as shaking Vernie’s hand or putting an object in front of the robot’s infrared sensor. You can launch a program either by tapping on its play/condition button or on the play button in the upper-right corner of the screen, which runs every program you have on screen at once.
Because the programs are mostly so simple, there are many reasons why you might want to have several running at once. For example, when my son was programming the guitar robot, he had one program that played a sound when the slider on the neck passed over the red tiles, another for when it passed over the green tiles and yet another for the blue tiles. In a complex adult program, these would be handled by an if/then statement, but such control structures are scarce in Boost (loops are available in the Creative Canvas free-play mode if you want them), so making several separate programs is necessary.
While the program(s) run, each block lights up as it executes, so you know exactly what’s going on at any time. You can even add and remove blocks, and the programs will keep on executing. I wish all the adult programming tools I use at work had these features!
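That several-small-programs pattern maps neatly onto ordinary code. The sketch below is a loose analogy in Python, not Lego's actual software; the tile colors and sound strings are invented for illustration. Each mini-program pairs a trigger condition with an action, and all registered programs are "live" at once, which together achieves what a single if/elif chain would in conventional code.

```python
# Toy model of Boost's approach: several tiny programs, all active at once,
# each pairing a trigger condition with an action.
programs = []

def program(condition):
    """Decorator that registers a (condition, action) pair as its own mini-program."""
    def register(action):
        programs.append((condition, action))
        return action
    return register

@program(lambda tile: tile == "red")
def red_sound(tile):
    return "play red clip"

@program(lambda tile: tile == "green")
def green_sound(tile):
    return "play green clip"

@program(lambda tile: tile == "blue")
def blue_sound(tile):
    return "play blue clip"

def slider_moved(tile):
    # Poll every registered program; any whose condition matches fires its action.
    return [action(tile) for condition, action in programs if condition(tile)]
```

Calling `slider_moved("green")` fires only the green program, while an unknown tile color fires none, which is exactly how the three separate guitar programs behaved side by side.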
Toolboxes, Custom Programs
Though you write programs as part of each challenge, if you really want to get creative, you need to head to the Coding Canvas mode. In each robot’s menu, to the right of the levels, there’s a red toolbox that you can tap on to write your own custom programs. As you complete different challenges that feature new functions, your Coding Canvas toolbox gets filled up with more code blocks that you can use.
My son had an absolute blast using the Guitar 4000’s toolbox mode to write a program in which moving the slider over the different colors on the guitar neck would play different clips of his voice.
Users who want to build their own custom robots and program them can head over to the Creative Canvas free-play mode by tapping on the open-window picture on the main menu. There, you can create new programs with blocks that control exactly what the Move Hub, IR sensor and motor do. So, rather than showing an icon with a block of a guitar playing like it does from within the Guitar 4000 menus, Boost shows a block with a speaker on it, because you can choose any type of sound from your custom robot.
In both Creative Canvas and Coding Canvas modes, Lego makes it easy to save your custom programs. The software automatically assigns names (which, coincidentally, are the names of famous Lego characters) and colorful icons to each of your programs for you, but children who can read and type are free to alter the names. All changes to programs are autosaved, so you never have to worry about losing your work.
As you might expect from Lego, Boost offers a best-in-class building experience with near-infinite expandability and customization. The kit comes with 847 Lego pieces, which include a combination of traditional-style bricks, with their knobs and grooves, and Technic-style bricks that use holes and plugs.
The building process for any of the Boost robots (Vernie, Frankie the Cat, M.I.R. 4, Guitar 4000 and Auto Builder) is lengthy but very straightforward. During testing, we built both the Vernie and Guitar 4000 robots, and each took around two hours for an adult to complete. Younger kids, who have less patience and worse hand-eye coordination, will probably need help from an adult or older child, but building these bots provides a great opportunity for parent/child bonding time. My 5-year-old (2 years below the recommended age) and I had a lot of fun putting the guitar together.
As part of the first challenge (or first several challenges), the app gives you a set of step-by-step instructions that show which bricks to put where. The illustrated instruction screens are very detailed and look identical to the paper Lego instructions you may have seen with any of the company’s kits. I just wish the app rendered these illustrations in 3D so you could rotate them and see the build from different angles, as you can in the app for UBTech’s Jimu robot kits.
All of the bricks connect together seamlessly and will work with any other bricks you already own. You could also easily customize one of the five recommended Boost robots with your own bricks. Imagine adorning Vernie’s body with pieces from a Star Wars set or letting your Batman minifig ride on the M.I.R. 4 forklift.
I really love the sky-blue, orange and gray color scheme Lego chose for the bricks that come with Boost, because it has an aesthetic that looks both high-tech and fun. From the orange wings on the Guitar 4000 robot to Vernie’s funky eyebrows, everything about the blocks screams “fun” and “inviting.”
Boost Versus Mindstorms and the Competition
At $159, the Lego Boost offers more for the money than any of the other robot kits we’ve reviewed, but it’s definitely designed for younger children who are new to programming. Older children or those who’ve used Boost for a while can graduate to Lego’s own Mindstorms EV3 kits, which start at $349 and use their own block-based coding language.
Starting at $129, UBTech’s line of Jimu robots offers a few more sensors and motors than Boost, along with a more complex programming language, but it definitely targets older and more experienced kids, and to get a kit that makes more than one or two robots, you need to spend over $300. Sony’s Koov kit is also a good choice for older and more tech-savvy children, but it’s way more expensive than Boost (it starts at $199, but you need to spend at least $349 to get most features), and its set of blocks is much less versatile than Lego’s.
Tenka Labs’ Circuit Cubes start at just $59 and provide a series of lights and motors that come with Lego-compatible bricks, but these kits teach electronics skills, not programming.
The best robot/STEM kit we’ve seen for younger children, Lego Boost turns coding into a game that’s so much fun your kids won’t even know they’re gaining valuable skills. Because it uses real Legos, Boost also invites a lot of creativity and replayability, and at $159, it’s practically a steal.
It’s a shame that millions of kids who use Amazon Fire tablets are left out of the Boost party, but hopefully, Lego will rectify this problem in the near future. Parents of older children with more programming savvy might want to consider a more complex robot set such as Mindstorms or Koov, but if your kid is new to coding and has access to a compatible device, the Boost is a must-buy.
I’m generally not the person you want testing your virtual, augmented, or otherwise “enhanced” reality technology. I am horribly susceptible to motion sickness, my presbyopia makes focusing on Google Glass–like displays pretty much impossible, and even 3D movies do not make my eyes happy. Using a good virtual reality system, I can go maybe 30 seconds before I have to escape to the real world; with a phone-based system, even a couple of seconds is too much.
But last week I spent at least 15 minutes (though it felt like less than five) completely engaged in a sampling of virtual worlds seen through Avegant’s mixed reality viewer. The experience was magical, enthralling, amazing, wonderful—pick your superlative. I didn’t get nauseous, or headachy, or feel any eyestrain at all. Indeed, my eyes felt rested (probably because that was 15 minutes not spent in front of a computer or phone screen). Also a wonderful part of the experience: the company didn’t bother with extreme security measures or nondisclosure agreements (though executives are not discussing specific technical details until patent filings are complete).
Avegant is a four-year-old startup based in Belmont, Calif. Its first product, the Glyph, a head-mounted display typically used for personal entertainment viewing, has been shipping since February of last year. (The company’s name is a mashup of the names of its founders, Edward Tang and Allan Evans.)
The company announced its transparent Light Field Display technology last month. It hasn’t said when this will be ready for manufacture, though Tang points out that the Glyph’s success shows that the company knows how to design products for manufacture and bring them to market.
Avegant’s prototype mixed reality system uses a headband to position the Avegant display. It is driven by a Windows PC with an Intel Core i7 processor and an Nvidia graphics card running the Unity game engine.
The images, explained cofounder Tang, now chief technology officer, are projected onto the retina by an array of MEMS micromirrors, each of which controls one pixel.
That, so far, is the same as the company’s Glyph system. But unlike a standard micromirror display, which reflects light straight at the person viewing it, these light field images are projected at different angles, mimicking the way light in the real world reflects off objects to hit a person’s eyes. The difference in these angles is particularly dramatic the closer someone is to the object, creating distinct and separate focal planes; the eye naturally refocuses when it moves from one plane to another.
To avoid having the eyes deal with these multiple focal planes, explained Tang, mixed reality systems like Microsoft’s HoloLens tend to keep viewers a meter or two away from objects. Light field technology, however, can use different focal planes for different objects simultaneously, so the user perceives even very close-up objects to be realistic. (Tang makes the case for light field technology in the video below.)
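The geometry behind that "particularly dramatic" claim is easy to check. The short sketch below uses generic optics (a typical interpupillary distance of about 63 mm, not anything specific to Avegant's hardware) to compute the angle at which the two eyes' lines of sight converge on an object straight ahead.

```python
import math

def vergence_degrees(distance_m, ipd_m=0.063):
    """Angle between the two eyes' lines of sight for an object
    straight ahead at distance_m, given interpupillary distance ipd_m."""
    return math.degrees(2 * math.atan((ipd_m / 2) / distance_m))

# The convergence angle grows rapidly as the object approaches:
for d in (0.25, 0.5, 1.0, 2.0, 10.0):
    print(f"{d:>5} m -> {vergence_degrees(d):.2f} degrees")
```

At 10 meters the angle is a fraction of a degree, while at arm's length it is more than ten degrees, which is why close-up objects demand distinct focal planes while far-away ones can share one.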
To date, Tang says, most attempts to bring light field technology into head-mounted displays have involved tricky-to-manufacture technology like deformable mirrors or liquid lenses, or approaches that take huge amounts of computing power to operate, like stacked LCDs.
“We created a new method,” he said, “that has no mechanical parts and uses existing manufacturing capabilities, with a level of computation that isn’t particularly high; it can run on standard PCs with graphics cards or mobile chipsets.”
The effect is designed to be natural—that is, you see virtual objects the same way you normally see real objects, with no eyestrain caused by struggling to focus. And, in the demo I was shown, it absolutely was.
I went through two mixed reality experiences in a slightly dim but not dark room with some basic furniture. The room was rigged with off-the-shelf motion tracking cameras to help map my position; the headset I wore was tethered to a PC. After a short calibration effort that allowed me to adjust the display to match the distance between my pupils, I entered a solar system visualization, walking among planets, peering up close at particular features (Earth seemed to be a little smaller than my head in this demo), and leaning even closer to trigger the playing of related audio.
Clear labels hovered near each planet, which brings up an interesting side note: I wasn’t wearing my reading glasses, but the labels, even close at hand, were quite clear. Tang mentioned that the developers have been discussing whether, for those of us who need reading glasses, it would be more realistic to make the virtual objects as blurry as the real ones. I vote no. I didn’t find it jarring that my hand, as I reached for planets, was a little fuzzy, particularly since the virtual objects appeared brighter than real-world ones. And it was quite lovely having so much of what I was seeing be clear.
At one point in the demo, while I was checking out asteroids near Saturn, Tang suggested that I step into the asteroid belt. I was a bit apprehensive; with my VR sickness history, it seemed that watching a flow of asteroids whizzing by me on both sides would be a uniquely bad idea, but it went just fine, and I could observe quite a bit of detail in the asteroids as they flowed past me.
The second demo involved a virtual fish tank. Tang asked me to walk over to a coffee table and look down at the surface; the fish tank then appeared, sitting on top of the table. I squatted next to the tank and put my hand into it. I reached out for a sea turtle; it was just the right size to fit in my palm. I followed it with my cupped hand for a while, and started feeling a whoosh of air across my palm whenever it swept its flippers back. I wondered for a moment if there was some virtual touch gear around, but it turned out to just be my mind filling in a few blanks in the very real scene. Tang then expanded the fish tank to fill the room; now that sea turtle was too big to hold, but I couldn’t resist trying to pet it. Then, he told me, “Check out that chair,” and in a moment, a school of tiny fish swept out from under the chair legs and swooped around the nearby furniture.
After convincing me to leave the fish demo (I was enjoying the experience of snorkeling without getting wet), Tang directed me to walk towards a female avatar. She was a computer-generated human that didn’t quite leave the uncanny valley—just a standard videogame avatar downloaded from a library, Tang said. But he pointed out that I could move up and invade her personal space and watch her expression change. And it certainly did seem that this avatar was in the room with me.
Throughout all the demos, I didn’t encounter any vision issues, focus struggles, or other discomfort as I looked back and forth between near and far and real and virtual objects.
I have not been one of the anointed few who have tested Magic Leap’s much-ballyhooed light-field-based mixed reality technology (and given the company’s extreme nondisclosure agreements, I likely couldn’t say much about it if I had). So, I don’t know how Avegant’s approach compares, though I’d be willing to put Avegant’s turtle up against Magic Leap’s elephant any day.
What I do know is that it absolutely blew me away. I’m eager to see what developers eventually do with it, and I’m thrilled that I no longer have to struggle physically to visit virtual worlds.
Google’s product launch on Tuesday was as much a jab at Apple’s iPhone as a sales pitch for its new Pixel phones, with executives from the Mountain View internet search company taking shots at their competitor at every turn.
But any gains Google makes with the $649 (roughly Rs. 43,000) Pixel, billed as completely designed in-house, may come not at the expense of Apple, but phone manufacturers running its Android software, a list topped by Samsung.
“A premium Android strategy is really a strategy to take market share from Samsung,” said analyst Jan Dawson of Jackdaw Research. The South Korean company already is reeling from a highly publicized recall of its Galaxy Note 7 phones due to battery fires.
“Obviously Google doesn’t want to explicitly compete with its own partners, but this product is much more likely to compete with Samsung than Apple,” Dawson said.
Google, a unit of Alphabet Inc, clearly has its sights set on the iPhone and the luxury consumer base that it commands.
“There’s no unsightly camera bump,” hardware chief Rick Osterloh said to laughter from the audience at the phone’s debut, alluding to the iPhone’s raised camera, a feature lamented by some design aficionados.
Newly released ads for the Pixel phones land some blows on the iPhone. A rundown of the phones’ new features concludes with “3.5mm headphone jack satisfyingly not new,” a reference to Apple’s decision to eliminate the port in the iPhone 7, which riled many customers.
Imitation is flattery
Nevertheless, the Pixel line bears a strong resemblance to the iPhone, coming in two sizes and a variety of sleek finishes. The Google Assistant, powered by artificial intelligence software, is a response to Apple’s Siri. And as Google prioritizes making its own hardware under Osterloh, its emerging design philosophy echoes Apple’s.
Hardware executive Mario Queiroz touted the company’s attention to packaging, a feature that the late Apple CEO Steve Jobs famously obsessed over.
“You want the consumer first of all to have this great experience out of the box in terms of the design of the packaging,” Queiroz, a vice president of product management at Google, said in an interview.
He brushed aside concerns that Google’s hardware push will pit it against its Android partners. The technology embedded in the Pixel phone is meant to propel Android devices forward, he said.
“It’s not a zero sum game,” Queiroz said. “We believe that Google can and will be doing both things. Both delivering platforms and building our own products.”
Google could find itself squaring off against two extremely deep-pocketed rivals. Apple and Samsung are the largest smartphone handset makers and both have major marketing programs.
Samsung spent at least $50 million (roughly Rs. 332 crores) just on advertising during the Olympics in Rio de Janeiro, Brazil, according to estimates from Kantar Media.
Spokespeople for Apple and Samsung did not respond to requests for comment on Google’s launch.
Chinese technology giant Lenovo will bring the much-awaited modular smartphone Moto Z to the Indian market this festive season.
The Moto Z, which was unveiled globally in June this year, allows users to attach a set of accessories called Moto Mods to the back of the device that adds various functionalities to the device.
“We will launch 8 new devices, of which six we have already announced, in this festive season… This includes the Moto Z,” Lenovo India Executive Director Mobile Business Group Sudhin Mathur told PTI.
While Mathur declined to comment on the timeline of availability of Moto Z in India, sources said the product could be launched in the first or second week of October.
Moto Z is currently available in markets like the US, the UK, and Latin America in three models (Moto Z Play, Moto Z, and Moto Z Force).
Lenovo, which acquired Motorola from Google in a $2.9 billion deal in 2014, is betting on India to contribute significantly to its global growth. Last year, Lenovo’s revenues from India grew about 90 percent, while its overall revenues were up 68 percent.
According to research firm Gartner, smartphone sales are expected to slow down in 2016 globally, rising only seven percent compared to double-digit growth seen in previous years.
This is on the back of slower sales growth in mature markets like Europe and Japan. India, on the other hand, remains an opportunity and presents the highest growth potential, Gartner said.
Google may be getting serious about selling its own hardware gadgets.
On Tuesday, the search giant will ramp up its consumer electronics strategy with expected announcements of new gadgets, including new smartphones and an Internet-connected personal assistant for the home similar to Amazon’s Echo speaker. All are intended to showcase Google’s software and online services.
A new virtual reality headset and other devices, such as a home router, could also be on tap, according to analysts and industry blogs. Google has declined to confirm any specifics, although it previously described some of these products back in May.
Google makes most of its money from online software and digital ads. But it’s putting more emphasis on hardware as it faces rivals like Apple, Amazon and South Korea’s Samsung.
Hardware is hard
New devices could help Google keep its services front and center in the battle for consumers’ attention, said analyst Julie Ask at Forrester Research. Unlike a new mobile app or other software, she noted, it can be an expensive gamble to build and ship new hardware products. “But if you’re Google, you can’t afford to stop placing bets.”
Google already sells smartphones and tablets under the Nexus brand, which it launched in 2010 as a way to show off the best features of its Android software. But it’s spent relatively little effort to promote those devices, which have mostly ended up in the hands of Google purists. Tech blogs are reporting the company is now planning to launch two smartphone models under a new brand, Pixel, and Google has hinted it may invest in an extensive marketing campaign intended to introduce the phones to the mass market.
Android already powers the majority of smartphones sold around the world. But Samsung, the biggest maker of Android phones, has increasingly been adding more of its own software – even its own Samsung Pay mobile wallet – on the phones it sells. Another big rival, Apple, has built its own services, such as online maps and its own Siri personal assistant, to replace Google’s apps on the iPhone.
Home, but not alone
Google is also likely to begin selling a voice-activated “smart speaker” called Home, apparently modeled on Amazon’s Echo. Analysts are expecting Google will announce more details, including price and availability, at Tuesday’s event.
The “Home” device will feature Google’s digital “Assistant” service, a voice-activated personal butler that can search the Internet, play music or perform other useful tasks. “Assistant” is the company’s answer to similar concierge services from rivals, including Siri, Amazon’s Alexa and Microsoft’s Cortana. The leading tech companies are all competing to assist consumers in their online activities such as shopping, since that gives the companies a better chance of selling advertising or other services.
Home-based systems like the Echo are taking on more importance with the advent of improved voice technology, said Forrester’s Ask. “You can’t assume somebody is going to go sit down at a computer or pick up a phone and type in a question anymore,” she said.
Google may also provide a closer look Tuesday at some other products, including a new virtual-reality headset that it teased in May. Like the other devices, Google’s virtual reality system could be a platform for a wide range of games and applications that are built on Google’s software.
After reports of a bug that causes a loss in cellular service after disabling Airplane Mode on the new iPhone 7 and iPhone 7 Plus, some more glitches have reportedly been found on Apple’s latest offerings. This time the glitch pertains to the Lightning port, which appears to disable in-line controls on the connected headset after a period of no playback. Apple has acknowledged the issue, and says a fix will be issued via a software update.
Reported by several iPhone 7 and iPhone 7 Plus users across the Web, the bug affects both the bundled Lightning EarPods and third-party headsets that are connected via the Lightning-to-3.5mm Headphone Jack Adapter. The issue reportedly occurs when the smartphone’s display is off for five minutes, with the headset connected but not playing back audio during that time. After this point, audio playback will work, but users cannot adjust the volume, activate Siri, or answer calls using the in-line controls on the EarPods or third-party headsets.
Furthermore, the glitch is intermittent rather than persistent. Those experiencing it can remove and reconnect the affected headset. This is an easy, temporary fix, but it doesn’t solve the underlying problem, which appears to be a software issue related to Lightning port power-saving features.
Apple has acknowledged the issue and is working on a fix that should be brought to users via a software update in the near future, an Apple representative confirmed to Business Insider.
The Cupertino giant controversially dropped the 3.5mm headphone jack with the launch of the iPhone 7 and iPhone 7 Plus in September, claiming the decision required “courage.” Apple wasn’t the first to do so, however: Lenovo quietly did away with the headphone jack a month earlier on some models of the Moto Z.
The decision to drop the iconic headphone jack came from Apple’s need to free up space for newer technologies and to use the Lightning port for higher-quality audio output. The dual-camera setup, the Taptic engine for the pressure-sensitive home button, water resistance, and a 14 percent bigger battery were all made possible by the removal of the 3.5mm port, according to Apple’s own claims. Of course, teething issues like these do not help make the decision to drop the 3.5mm headphone jack acceptable to customers.
Screen-testing firm DisplayMate termed the display on the Samsung Galaxy Note 7 the “Best Performing Smartphone Display” the company has ever tested. Now, the company has tested the display on the iPhone 7 and claims that it is the “best performing mobile LCD display” it has ever come across, while it also sets some overall records.
DisplayMate’s President Dr. Raymond M. Soneira said that even though the iPhone 7’s display is indistinguishable in size and resolution from those of the iPhone 6 and iPhone 6s, it is ‘truly impressive’ and a ‘major enhancement’ over the iPhone 6’s display.
One of the main reasons for the improvement, as pointed out by DisplayMate, is the presence of two standard colour gamuts. The iPhone 7 carries a “DCI-P3 Wide Color Gamut that is generally used in 4K UHD TVs and Digital Cinema” and a “traditional smaller sRGB / Rec.709 Color Gamut,” the company said.
DisplayMate says that both colour gamuts have been implemented with absolute colour accuracy that is “Visually Indistinguishable from Perfect.” According to the company, the iPhone 7’s display is rated at 625 nits of brightness but measured 602 nits in testing – still the “Highest Peak Brightness” DisplayMate has recorded on any smartphone. With Automatic Brightness turned on in high ambient light conditions, peak brightness climbs even higher, to 705 nits.
As per DisplayMate, the displays on Apple’s smartphones have a record high contrast ratio for IPS LCD displays and a record low screen reflectance for smartphones.
Regarding the power efficiency of the display, Soneira said that wide colour gamut LCDs like the iPhone 7 use “specially tuned Red and Green phosphors to optimally transform the light for the chosen saturated Red and Green primaries, which improves their light and power efficiency.”
It has been rumoured that Apple’s iPhone for 2017 might ship with an OLED display. If this turns out to be true, the iPhone 7’s display might be the last big leap that Apple takes with LCD displays.
Have you noticed that most Facebook apps these days have a camera button built in? Well, says Facebook CEO Mark Zuckerberg, now it’s time to use those buttons to turn on augmented reality for just about everything you’re doing in Facebook’s world.
“We are making the camera the first augmented reality platform,” Zuckerberg said, kicking off Facebook’s F8 developer conference in San Jose this morning. “I used to think glasses would be the first mainstream augmented reality platform,” he said. But he’s changed his mind.
By “camera,” Zuckerberg really means the camera button (which allows users to directly access a mobile device’s actual camera) and related photo processing tools in Facebook and related apps. Now, Zuckerberg said, Facebook is going to roll out tools to allow developers to create augmented reality experiences that can be reached through that photo feature. These tools will include precise location mapping, creation of 3D objects from 2D images, and object recognition.
Developers, he expects, will be able to apply these tools to generate virtual images that appear to interact directly with the real environment. For example, fish will swim around on your kitchen table and appear to go behind your real cereal bowl, virtual flowers will blossom on a real plant, virtual steam will come out of a real coffee mug, or a virtual companion’s mug will appear next to yours on your table in order to make your breakfast routine feel a little less lonely. Augmented reality will also allow users to leave notes for friends in specific locations—say, at a table in a particular restaurant—or let them view pop-up labels tagged to real world objects.
“Augmented reality will let us mix the digital and the physical,” Zuckerberg said in his keynote address to 4000 developers, “and that will make our physical reality better.”
Zuckerberg also predicted the advent of augmented reality street art, and suggested that as technology makes people working in traditional jobs more productive, more and more people will contribute to society through the arts.
Zuckerberg said that it will take a while to roll some of these experiences out into the world, but developers can get started now with a closed beta of its AR Studio software, launching today. Also available to users beginning today: a limited library of augmented effects.
LG introduced the K7 smartphone in India earlier this year, and now the company has added some interesting features specifically for visually challenged users. The LG K7 was one of the first smartphones to flag off the company’s K-series, and the LG K7 LTE variant was also the first to be manufactured in India.
The LG K7 price remains unchanged at Rs. 9,500, but the company has updated the smartphone with new features like screen-reading software (TalkBack) and text-to-speech (TTS) language pack support. It will also come with built-in apps like the Cozy Daisy Book Reader app (with three books preloaded), Voice Reader app, eReader app, GPS Essential app, Kota Magnifier app, and Play Magazine app.
The LG K7 smartphone will have call and text voice reading enabled by default, and will also have two Word books and two PDF books preloaded. The company notes that a batch of these updated LG K7 units was distributed to differently abled people at Samajik Adhikarita Shivir in Gujarat, as part of Prime Minister Narendra Modi’s birthday celebrations over the weekend.
“We are happy and extremely proud to make the smartphone more relevant for differently abled users, we have included special features to the K7. We hope that by doing this, we will be able to make a small difference to someone’s life and spread happiness,” said Kim Ki-Wan, Managing Director, LG India, in a statement.
As for the technical specifications, they remain the same. The LG K7 features a 5-inch FWVGA (480×854 pixels) display. It runs on Android 5.1 Lollipop, and packs a 1.1GHz quad-core SoC coupled with 1.5GB of RAM. It bears a 5-megapixel rear camera, a 5-megapixel front-facing camera, 8GB of inbuilt storage, and a 2125mAh battery. It supports 4G LTE, and measures 143.6×72.5×8.9mm.