CCE Blog: Tech, Lifestyle, Entertainment

Slug-Inspired Glue Patches Beating Hearts

The adhesive, described today (July 27) in a new study in the journal Science, sticks to wet surfaces, including the surface of a beating heart. It isn’t toxic to cells, which gives it an advantage over many surgical glues. It’s not available in operating rooms just yet — its developers say that could take years — but it could potentially be approved much more quickly for applications such as closing skin wounds.

The slug-inspired glue is “very stretchy and very tough,” said Jianyu Li, a postdoctoral researcher at Harvard University’s Wyss Institute for Biologically Inspired Engineering and the lead author of the study. Li and his colleagues applied the adhesive to a blood-soaked, beating pig heart and found that it worked better than any other surgical glue on the market.

The inspiration for the glue came from Arion subfuscus, a large and slimy species of slug found in North America and western Europe. These slugs excrete a sticky, yellow-orange slime that adheres well to wet surfaces.

That characteristic intrigued Li and his colleagues, and they set to work making an artificial version of the slime. The key, Li told Live Science, is that the slime is made up of long, straight chains of molecules called polymers, which are also bound to each other — a phenomenon called cross-linking. Cross-linking makes materials strong, but the slug slime has the added advantage of having two types of cross-link bonds. Some are covalent bonds, which hold molecules together by sharing electrons. Others are ionic bonds, in which one molecule hands over its electrons to another. These “hybridized” cross-links make the slug mucus both tough and stretchy, Li said.

The team mimicked this structure using artificial polymers layered onto what they called a “dissipative matrix.” The polymers provide the sticking power, Li explained, while the dissipative-matrix layer acts like a shock absorber: It can stretch and deform without rupturing.

To test the glue, the researchers applied it to pig skin, cartilage, arteries, liver tissue and hearts — including hearts that were inflated with water or air and covered in blood. The material proved extremely stretchable, expanding to 14 times its original length without ever breaking loose from the liver tissue. When used to patch a hole in a pig heart, the adhesive maintained its seal even when it was stretched to twice its original length tens of thousands of times, at pressures exceeding normal human blood pressure.

The researchers even applied the adhesive to the beating heart of a real pig and found that the adhesion to the dancing, bloody surface was about eight times as strong as the adhesion of any commercially available surgical glue.

The glue was also tested in living rats: The researchers simulated an emergency surgery by slicing the rats’ liver tissue and then patching the wound with either the glue or a standard blood-staunching product called Surgiflo. They found that the new adhesive was as good at stopping the blood flow as the standard product; the rats treated with the new glue experienced no additional hemorrhaging up to two weeks after the surgery. The Surgiflo-treated rats, however, sometimes suffered tissue death and scarring, the researchers reported. The rats treated with the slime-inspired glue did not experience these side effects.

Whether the new glue makes it to the operating room depends on much more extensive clinical testing, Li said, but the adhesive could make its debut as a new method of dressing external wounds on a shorter timeline than that.

“We have a company working on trying to push our material to clinical applications, and we have a patent pending,” Li said.

Advanced Vision Algorithm Helps Robots Learn to See in 3D

Robots are reliable in industrial settings, where recognizable objects appear at predictable times in familiar circumstances. But life at home is messy. Put a robot in a house, where it must navigate unfamiliar territory cluttered with foreign objects, and it’s useless.

Now researchers have developed a new computer vision algorithm that gives a robot the ability to recognize three-dimensional objects and, at a glance, intuit items that are partially obscured or tipped over, without needing to view them from multiple angles.

“It sees the front half of a pot sitting on a counter and guesses there’s a handle in the rear and that might be a good place to pick it up from,” said Ben Burchfiel, a Ph.D. candidate in the field of computer vision and robotics at Duke University.

In experiments where the robot viewed 908 items from a single vantage point, it guessed the object correctly about 75 percent of the time. State-of-the-art computer vision algorithms previously achieved an accuracy of about 50 percent.

Burchfiel and George Konidaris, an assistant professor of computer science at Brown University, presented their research last week at the Robotics: Science and Systems Conference in Cambridge, Massachusetts.

Like other computer vision algorithms used to train robots, their robot learned about its world by first sifting through a database of 4,000 three-dimensional objects spread across ten different classes — bathtubs, beds, chairs, desks, dressers, monitors, night stands, sofas, tables, and toilets.

More conventional algorithms may, for example, train a robot to recognize the entirety of a chair, pot or sofa, or train it to recognize parts of a whole and piece them together. This algorithm instead looked for how objects were similar and how they differed.

When it found consistencies within classes, it ignored them in order to shrink the computational problem down to a more manageable size and focus on the parts that were different.

For example, all pots are hollow in the middle. When the algorithm was being trained to recognize pots, it didn’t spend time analyzing the hollow parts. Once it knew the object was a pot, it focused instead on the depth of the pot or the location of the handle.

“That frees up resources and makes learning easier,” said Burchfiel.

Extra computing resources are used to figure out whether an item is right-side up and to infer its three-dimensional shape if part of it is hidden. This last problem is particularly vexing in the field of computer vision, because in the real world, objects overlap.

To address it, scientists have mainly turned to the most advanced form of artificial intelligence, which uses artificial neural networks, or so-called deep-learning algorithms, because they process information in a way that’s similar to how the brain learns.

Although deep-learning approaches are good at parsing complex input data, such as analyzing all of the pixels in an image, and predicting a simple output, such as “this is a cat,” they’re not good at the inverse task, said Burchfiel. When an object is partially obscured, a limited view — the input — is less complex than the output, which is a full, three-dimensional representation.

The algorithm Burchfiel and Konidaris developed constructs a whole object from partial information by finding complex shapes that tend to be associated with each other. For instance, objects with flat square tops tend to have legs. If the robot can only see the square top, it may infer the legs.

“Another example would be handles,” said Burchfiel. “Handles connected to cylindrical drinking vessels tend to connect in two places. If a mug-shaped object is seen with a small nub visible, it is likely that that nub extends into a curved, or square, handle.”
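The article doesn’t spell out the paper’s actual machinery, but the flavor of this kind of class-conditional shape completion can be sketched with ordinary PCA over voxel grids. The following toy Python sketch is an illustration under that assumption, not the authors’ implementation; every name in it is invented:

```python
# Toy sketch: complete a partially observed object using a per-class linear
# shape basis (plain PCA over flattened voxel grids). Illustrative only.
import numpy as np

def fit_class_basis(voxel_grids, n_components=20):
    """Learn a mean shape and the main directions of variation for one class.

    voxel_grids: (n_shapes, n_voxels) array of flattened 0/1 occupancy grids.
    """
    mean = voxel_grids.mean(axis=0)
    # Rows of vt are the directions in which shapes of this class vary most.
    _, _, vt = np.linalg.svd(voxel_grids - mean, full_matrices=False)
    return mean, vt[:n_components]

def complete_shape(partial, observed, mean, basis):
    """Infer a full shape from the observed voxels via least squares.

    partial: flattened grid, valid only where the boolean mask `observed` is True.
    """
    a = basis[:, observed].T                  # (n_observed, n_components)
    b = partial[observed] - mean[observed]
    coeffs, *_ = np.linalg.lstsq(a, b, rcond=None)
    return np.clip(mean + coeffs @ basis, 0.0, 1.0)  # full occupancy estimate
```

Given only the voxels of a flat square top, the least-squares fit picks the table-class shape coefficients most consistent with that view, and the reconstruction fills in the unseen legs.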

Once trained, the robot was then shown 908 new objects from a single viewpoint. It achieved correct answers about 75 percent of the time. Not only was the approach more accurate than previous methods, it was also very fast. After a robot was trained, it took about a second to make its guess. It didn’t need to look at the object from different angles and it was able to infer parts that couldn’t be seen.

This type of learning gives the robot a visual perception that’s similar to the way humans see. It interprets objects with a more generalized sense of the world, instead of trying to map knowledge of identical objects onto what it’s seeing.

Burchfiel said he wants to build on this research by training the algorithm on millions of objects and perhaps tens of thousands of types of objects.

“We want to build this into a single robust system that could be the baseline behind a general robot perception scheme,” he said.

Lego Boost Review: The Best Robot Kit for Kids

Toys that teach kids to code are as hot in 2017 as Cabbage Patch Kids were in 1983, and for good reason. For today’s generation of children, learning how to program is even more important than studying a second language. Though there are many robot kits on the market that are designed for this purpose, Lego Boost is the best tech-learning tool we’ve seen for kids. Priced at a very reasonable $159, Boost provides the pieces to build five different robots, along with an entertaining app that turns learning into a game that even preliterate children can master.

Boost comes with a whopping 847 different Lego bricks, along with one motor (which also serves as a dial control on some projects), one light/IR sensor and the Move Hub, a large white and gray brick with two built-in motors that serves as the central processing unit for the robot. The Hub connects to your tablet via Bluetooth, to receive your programming code, and to the other two electronic components via wires.

You can build five different robots with the kit: a humanoid robot named Vernie, Frankie the Cat, the Guitar 4000 (which plays real music), a forklift called the “M.I.R. 4” and a robotic “Auto Builder” car factory. Lego said that it expects most users to start with Vernie, who looks like a cross between film robots Johnny No. 5 and Wall-E and offers the most functionality.

To get started building and coding, kids have to download the Boost app to their iPad or Android tablet. You’ll need to have the app running and connected to the Move Hub every time you use the robot. All of the processing and programming takes place on your mobile device, and the sound effects (music, the robot talking) come out of your tablet’s speaker, not the robot itself.

Lego really understands how young children learn and has designed the perfect interface for them. The Boost app strikes a balance among simplicity, depth and fun. Boost is officially targeted at 7- to 12-year-olds, but the software is so intuitive and engaging that, within minutes of seeing the system, my 5-year-old was writing his own programs and begging me to extend his bedtime so he could discover more.

Neither the interface nor the block-based programming language contains any written words, so even children who can’t read can use every feature of the app. When you launch Boost, you’re first shown a cartoonish menu screen that looks like a room with all the different possible robots sitting in different spots. You just tap on the image of the robot you want to build or program, and you’re given a set of activities that begin with building the most basic parts of the project and coding them.

As you navigate through the Boost program, you need to complete the simplest levels within each robot section before you can unlock the more complicated ones. Any child who has played video games is familiar with and motivated by the concept of unlocking new features by successfully completing old ones. This level-based system turns the entire learning process into a game and also keeps kids from getting frustrated by trying advanced concepts before they’re ready.

Boost runs on modern iPads or Android devices that have at least a 1.4-GHz CPU, 1GB of RAM, Bluetooth LE, and Android 5.0 or above. (I also downloaded Boost to a smartphone, but the screen was so small that it was difficult to make out some of the diagrams.)

Unfortunately, Lego doesn’t plan to list the program in Amazon’s app store, which means you can’t easily use Boost with a Fire tablet, the top-selling tablet in the U.S. I was able to sideload Boost onto my son’s Fire 7 Kids Edition, but most users won’t have the wherewithal to do that. Lego makes its Mindstorms app available to Fire devices, so we hope the company will eventually see fit to do the same with Boost.

When you load the Boost app for the first time, you need to complete a simple project that involves making a small buggy before you can build any of the five robots. This initial build is pretty fast, because it involves only basic things like putting wheels onto the car, programming it to move forward and attaching a small fan in the back.

Like the robot projects that come after it, the buggy build is broken down into three separate challenges, each of which builds on the prior one. The first challenge involves building the buggy and programming it to roll forward. Subsequent challenges involve programming the vehicle’s infrared sensor and making the fan in the back move.

After you’ve completed all three buggy challenges, the five regular robots are unlocked. Each robot has several levels within it, each of which contains challenges that you must complete. For example, Vernie’s first level has three challenges that help you build him and use his basic functions, while the second level has you add a rocket launcher to his body and program him to shoot.

If a challenge includes building or adding blocks to a robot, it gives you step-by-step instructions that show you which blocks go where, and only after you’ve gone through these steps do you get to the programming portion.

When it’s time to code, the app shows animations of a finger dragging the coding blocks from a palette on the bottom of the screen up onto the canvas, placing them next to each other and hitting a play button to run the program. This lets the user know exactly what to do at every step, but also offers the ability to experiment by modifying the programs at the end of each challenge.

In Vernie’s case, each of the first-level challenges involves building part of his body. Lego Design Director Simon Kent explained to us that, because a full build can take hours, the company wants children to be able to start programming before they’re even finished. So, in the first challenge, you build the head and torso, then program him to move his neck, while in later ones, you add his wheels and then his arms.

Like almost all child-coding apps, Boost uses a pictorial, block-based programming language that involves dragging interlocking pieces together, rather than keying in text. However, unlike some programming kits we’ve seen, which require you to read text on the blocks to find out what they do, Boost’s system is completely icon-based, making it ideal for children who can’t read (or can’t read very well) yet.

For example, instead of seeing a block that says, “Move Forward” or “Turn right 90 degrees,” you see blocks with arrows on them. All of the available blocks are located on a palette at the bottom of the screen; you drag them up onto the canvas and lock them together to write programs.

Some of the icons on the blocks are less intuitive than an arrow or a play button, but Boost shows you (with an animation) exactly which blocks you need in order to complete each challenge. It then lets you experiment with additional blocks to see what they do.

What makes the app such a great learning tool is that it really encourages and rewards discovery. In one of the first Vernie lessons, there were several blocks with icons showing the robot’s head at different angles. My son was eager to drag each one into a program to see exactly what it did (most turned the neck).

Programs can begin either with a play button, which just means “start this action,” or with a condition, such as shaking Vernie’s hand or putting an object in front of the robot’s infrared sensor. You can launch a program either by tapping on its play/condition button or by tapping the play button in the upper right corner of the screen, which runs every program you have on screen at once.

Because the programs are mostly so simple, there are many reasons why you might want to have several running at once. For example, when my son was programming for the guitar robot, he had a program that played a sound when the slider on the neck passed over the red tiles, another one for when it passed over the green tiles and yet another for the blue tiles. In a complex adult program, these would be handled by an if/then statement, but in Boost, there are few loops (you can use them in the Creative Canvas free-play mode if you want), so making several separate programs is necessary.
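For readers who do write adult programs, here is a rough Python rendering of what those three separate color-triggered Boost programs collapse into; the function and clip names are invented for illustration, and this is not actual Boost code:

```python
# Hypothetical sketch: the if/then version of three separate Boost programs,
# one per tile color on the guitar's neck. All names are made up.
def guitar_loop(read_slider_color, play_clip):
    while True:
        color = read_slider_color()      # e.g. "red", "green", "blue", or None
        if color == "red":
            play_clip("sound_for_red")
        elif color == "green":
            play_clip("sound_for_green")
        elif color == "blue":
            play_clip("sound_for_blue")
```

In Boost, each branch instead lives in its own tiny program whose condition block fires when the sensor sees that color.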

While the program(s) run, each block lights up as it executes, so you know exactly what’s going on at any time. You can even add and remove blocks, and the programs will keep on executing. I wish all the adult programming tools I use at work had these features!

Though you write programs as part of each of the challenges, if you really want to get creative, you need to head to the Coding Canvas mode. In each robot’s menu, to the right of the levels, there’s a red toolbox that you can tap on to write your own custom programs. As you complete different challenges that feature new functions, your Coding Canvas toolbox fills up with more code blocks that you can use.

My son had an absolute blast using the Guitar 4000’s toolbox mode to write a program in which moving the slider over the different colors on the guitar neck would play different clips of his voice.

Users who want to build their own custom robots and program them can head over to the Creative Canvas free-play mode by tapping on the open-window picture on the main menu. There, you can create new programs with blocks that control exactly what the Move Hub, IR sensor and motor do. So rather than showing a block with an icon of a guitar playing, as it does within the Guitar 4000 menus, Boost shows a block with a speaker on it, because your custom robot can use any type of sound.

In both Creative Canvas and Coding Canvas modes, Lego makes it easy to save your custom programs. The software automatically assigns names (which, coincidentally, are the names of famous Lego characters) and colorful icons to each of your programs for you, but children who can read and type are free to alter the names. All changes to programs are autosaved, so you never have to worry about losing your work.

As you might expect from Lego, Boost offers a best-in-class building experience with near-infinite expandability and customization. The kit comes with 847 Lego pieces, which include a combination of traditional-style bricks, with their knobs and grooves, and Technic-style bricks that use holes and plugs.

The building process for any of the Boost robots (Vernie, Frankie the Cat, M.I.R. 4, Guitar 4000 and Auto Builder) is lengthy but very straightforward. During testing, we built both the Vernie and Guitar 4000 robots, and each took around 2 hours for an adult to complete. Younger kids, who have less patience and worse hand-eye coordination, will probably need help from an adult or older child, but building these bots provides a great opportunity for parent/child bonding time. My 5-year-old (2 years below the recommended age) and I had a lot of fun putting the guitar together.

As part of the first challenge (or first several challenges), the app gives you a set of step-by-step instructions that show which bricks to put where. The illustrated instruction screens are very detailed and look identical to the paper Lego instructions you may have seen in any of the company’s kits. I just wish that the app made these illustrations 3D so you could rotate them and see the build from different angles, like you can in UBTech’s Jimu Robots kit app.

All of the bricks connect together seamlessly and will work with any other bricks you already own. You could also easily customize one of the five recommended Boost robots with your own bricks. Imagine adorning Vernie’s body with pieces from a Star Wars set or letting your Batman minifig ride on the M.I.R. 4 forklift.

I really love the sky-blue, orange and gray color scheme Lego chose for the bricks that come with Boost, because it has an aesthetic that looks both high-tech and fun. From the orange wings on the Guitar 4000 robot to Vernie’s funky eyebrows, everything about the blocks screams “fun” and “inviting.”

At $159, the Lego Boost offers more for the money than any of the other robot kits we’ve reviewed, but it’s definitely designed for younger children who are new to programming. Older children, or those who’ve used Boost for a while, can graduate to Lego’s own Mindstorms EV3 kits, which start at $349 and use their own block-based coding language.

Starting at $129, UBTech’s line of Jimu robots offers a few more sensors and motors than Boost, along with a more complex programming language, but those kits definitely target older and more experienced kids, and to get a kit that makes more than one or two robots, you need to spend over $300. Sony’s Koov kit is also a good choice for older and more tech-savvy children, but it’s far more expensive than Boost (it starts at $199, but you need to spend at least $349 to get most features), and its set of blocks is much less versatile than Lego’s.

Tenka Labs’ Circuit Cubes start at just $59 and provide a series of lights and motors that come with Lego-compatible bricks, but these kits teach electronics skills, not programming.

The best robot/STEM kit we’ve seen for younger children, Lego Boost turns coding into a game that’s so much fun your kids won’t even know they’re gaining valuable skills. Because it uses real Legos, Boost also invites a lot of creativity and replayability, and at $159, it’s practically a steal.

It’s a shame that millions of kids who use Amazon Fire tablets are left out of the Boost party, but hopefully, Lego will rectify this problem in the near future. Parents of older children with more programming savvy might want to consider a more complex robot set such as Mindstorms or Koov, but if your kid is new to coding and has access to a compatible device, the Boost is a must-buy.

Scientists Edit Human Embryo: This Is Why Designer Babies Are a Ways Off

The announcement by researchers in Portland, Oregon, that they’ve successfully modified the genetic material of a human embryo took some people by surprise.

With headlines referring to “groundbreaking” research and “designer babies,” you might wonder what the scientists actually accomplished. This was a big step forward, but hardly unexpected. As this kind of work proceeds, it continues to raise questions about ethical issues and how we should react.

For a number of years now we have had the ability to alter genetic material in a cell, using a technique called CRISPR.

The DNA that makes up our genome comprises long sequences of base pairs, each base indicated by one of four letters. These letters form a genetic alphabet, and the “words” or “sentences” created from a particular order of letters are the genes that determine our characteristics.

Sometimes words can be “misspelled” or sentences slightly garbled, resulting in a disease or disorder. Genetic engineering is designed to correct those mistakes. CRISPR is a tool that enables scientists to target a specific area of a gene, working like the search-and-replace function in Microsoft Word, to remove a section and insert the “correct” sequence.
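To make the analogy concrete, here is a toy Python version of that search-and-replace operation on a made-up DNA string; actual gene editing is vastly more involved, and these sequences are meaningless placeholders:

```python
# Toy illustration of the "search and replace" analogy only. The sequences
# below are invented and do not correspond to any real gene.
genome     = "ATGGTGCACCTGACTCCTGTGGAGAAGTCT"   # hypothetical stretch of DNA
misspelled = "CCTGTGGAG"                        # the "garbled" word to find
corrected  = "CCTGAGGAG"                        # the "correct" spelling

edited = genome.replace(misspelled, corrected)
print(edited)   # the same sequence with the target section swapped out
```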

In the last decade, CRISPR has been the primary tool for those seeking to modify genes – human and otherwise. Among other things, it has been used in experiments to make mosquitoes resistant to malaria, genetically modify plants to be resistant to disease, explore the possibility of engineered pets and livestock, and potentially treat some human diseases (including HIV, hemophilia and leukemia).

Up until recently, the focus in humans has been on changing the cells of a single individual, and not changing eggs, sperm and early embryos – what are called the “germline” cells that pass traits along to offspring. The theory is that focusing on non-germline cells would limit any unexpected long-term impact of genetic changes on descendants. At the same time, this limitation means that we would have to use the technique in every generation, which affects its potential therapeutic benefit.

Earlier this year, an international committee convened by the National Academy of Sciences issued a report that, while highlighting the concerns with human germline genetic engineering, laid out a series of safeguards and recommended oversight. The report was widely regarded as opening the door to embryo-editing research.

That is exactly what happened in Oregon. Although this is the first study reported in the United States, similar research has been conducted in China. This new study, however, apparently avoided previous errors we’ve seen with CRISPR – such as changes in other, untargeted parts of the genome, or the desired change not occurring in all cells. Both of these problems had made scientists wary of using CRISPR to make changes in embryos that might eventually be used in a human pregnancy. Evidence of more successful (and thus safer) CRISPR use may lead to additional studies involving human embryos.

First, this study did not entail the creation of “designer babies,” despite some news headlines. The research involved only early stage embryos, outside the womb, none of which was allowed to develop beyond a few days.

In fact, there are a number of existing limits – both policy-based and scientific – that will create barriers to implanting an edited embryo to achieve the birth of a child. There is a federal ban on funding gene editing research in embryos; in some states, there are also total bans on embryo research, regardless of how it is funded. In addition, the implantation of an edited human embryo would be regulated under the federal human research regulations, the Food, Drug and Cosmetic Act and potentially the federal rules regarding clinical laboratory testing.

Beyond the regulatory barriers, we are a long way from having the scientific knowledge necessary to design our children. While the Oregon experiment focused on correcting a single gene tied to an inherited disease, few human traits are controlled by a single gene. Anything that involves multiple genes or a gene/environment interaction will be less amenable to this type of engineering. Most characteristics we might be interested in designing – such as intelligence, personality, or athletic, artistic or musical ability – are much more complex.

Second, while this is a significant step forward in the science surrounding the use of the CRISPR technique, it is only one step. There is a long way to go between this and a cure for various diseases and disorders. This is not to say that there aren’t concerns. But we have some time to consider the issues before the use of the technique becomes mainstream medical practice.

Taking into account the cautions above, we do need to decide when and how we should use this technique.

Should there be limits on the types of things you can edit in an embryo? If so, what should they entail? These questions also involve deciding who gets to set the limits and control access to the technology.

We may also be concerned about who gets to control the subsequent research using this technology. Should there be state or federal oversight? Keep in mind that we cannot control what happens in other countries. Even in this country it can be difficult to craft guidelines that restrict only the research someone finds objectionable, while allowing other important research to continue. Additionally, the use of assisted reproductive technologies (IVF, for example) is largely unregulated in the U.S., and the decision to put in place restrictions will certainly raise objections from both potential parents and IVF providers.

Moreover, there are important questions about cost and access. Right now most assisted reproductive technologies are available only to higher-income individuals. A handful of states mandate infertility treatment coverage, but it is very limited. How should we regulate access to embryo editing for serious diseases? We are in the midst of a widespread debate about health care, access and cost. If it becomes established and safe, should this technique be part of a basic package of health care services when used to help create a child who does not suffer from a specific genetic problem? What about editing for non-health issues or less serious problems – are there fairness concerns if only people with sufficient wealth can access it?

So far the promise of genetic engineering for disease eradication has not lived up to its hype. Nor have many other milestones, like the 1996 cloning of Dolly the sheep, resulted in the feared apocalypse. The announcement of the Oregon study is only the next step in a long line of research. Nonetheless, it is sure to bring many of the issues about embryos, stem cell research, genetic engineering and reproductive technologies back into the spotlight. Now is the time to figure out how we want to see this gene-editing path unfold.

FDA Looking to Move Smokers Toward E-Cigarettes

The U.S. Food and Drug Administration aims to reduce nicotine levels in cigarettes while exploring measures to move smokers toward e-cigarettes, in a major regulatory shift announced on Friday that sent traditional cigarette company stocks plunging.

The move means FDA Commissioner Scott Gottlieb has thrown his regulatory weight behind those advocating for e-cigarettes in the debate over whether the devices hold public health benefits.

Shares of major tobacco companies in the United States and UK slumped in heavy trading volume, with the world’s biggest producers poised to lose about $60 billion of market value.

The FDA’s move extends the application deadline for new e-cigarette clearance to Aug. 8, 2022, giving e-cigarette companies more time to keep their products on the market before the agency begins its final review. It also gives the FDA more time to set the proper framework for regulating e-cigarettes.

“It’s hard to overstate what this could mean for the companies affected: non-addictive levels of nicotine would likely mean a lot fewer smokers and of those people who do still light up, smoking a lot less,” said Neil Wilson, a senior market analyst with ETX Capital in London.

“This is just the U.S. regulator acting but we can easily see others, particularly in Europe, where regulatory pressures are already extremely high, following suit,” Wilson said.

British American Tobacco shares, trading close to all-time highs, fell as much as 11 percent and were on track for their biggest one-day loss in nearly 18 years.

Altria, which makes the Marlboro brand of cigarettes, fell as much as 16 percent, slipping into the red for the year.

Can frequent, moderate drinking ward off diabetes?

It’s not every day that medical studies say alcohol could be good for you. People who drink moderately often have a lower risk of developing diabetes than those who never drink, according to a new study published in Diabetologia, the journal of the European Association for the Study of Diabetes.

Men and women who hoist a few glasses three to four days a week have the lowest risk of developing diabetes, Danish researchers found. Compared with people who drink less than one day each week, men who drank frequently had a 27% lower risk, while women had a 32% lower risk, the researchers said.

Diabetes is a disease in which blood glucose — sugar — levels are high. When we eat, most of our food is turned into glucose to be burned as energy, with a hormone called insulin helping our cells absorb glucose. People who have diabetes either don’t make enough insulin or don’t use it effectively. As a result, sugar builds up in their blood, leading to health problems.

Past studies have consistently shown that light to moderate drinking carries a lower risk of diabetes compared with sobriety, while heavy drinking carries an equal or greater risk. Though the World Health Organization reports that “harmful use of alcohol” contributes to more than 200 diseases and injuries, it also acknowledges that light to moderate drinking may be beneficial with respect to diabetes.

Since an important relationship exists between drinking and diabetes, Professor Janne Tolstrup and her colleagues from the National Institute of Public Health of the University of Southern Denmark studied the specifics.

How the study worked

They began by gathering data from Danish citizens 18 years old or older who completed the Danish Health Examination Survey. The data set included 28,704 men and 41,847 women — more than 70,000 participants total — who self-reported their drinking habits and other lifestyle details beginning in 2007-2008 and continuing through 2012.

During the study period, 859 men and 887 women developed diabetes.

Overall, those with the lowest risk of developing diabetes were people who drank moderately on a weekly basis, Tolstrup’s analysis showed.

In terms of volume, 14 alcoholic beverages each week for men and nine beverages each week for women yielded the best results: a 43% and 58% lower risk, respectively, compared with non-drinkers, the researchers found.

“In principle we can only say something about the five-year risk from this study,” said Tolstrup in an email. “However, there is no reason to think that results would be different had we had more years of follow-up.” A very long follow-up, for instance 10 years, would result in drinking and other habits changing, and this could “cause more ‘noise’ in results,” said Tolstrup.

In terms of frequency, the lowest risk of diabetes was found among those who drank three to four days each week.

The team also looked at diabetes risk in relation to what people drank.

When it came to beer, men who drank between one and six beers each week reduced their risk of diabetes by 21% compared with men hoisting less than one beer each week.

For women, the association between beer and diabetes risk was not clear, and the same was true for men and spirits. Women, though, appear to have a problematic relationship with spirits: Seven or more drinks of liquor each week was associated with an 83% increased risk of diabetes, compared with women drinking less than one drink of spirits each week.

There shouldn’t be much emphasis placed on the results for spirits, Tolstrup said, “because few people were drinking a lot of spirits, most were drinking wine and beer.” With 70% of all alcohol drunk by women being wine, the beer results for women are also “unsure.”

The ‘French paradox’

Crunching the numbers for wine drinkers, the team found that moderate to high wine drinking was associated with a lower risk of diabetes.

Men and women who drank seven or more glasses of wine each week had a 25% to 30% lower risk of diabetes compared with those who drank less than one glass.

Dr. Etto Eringa and Dr. EH Serné of VU University Medical Center Amsterdam said “moderate consumption of red wine has been shown to be related to a lower risk of type 2 diabetes (and cardiovascular disease)” in other population studies, as well.

Eringa and Serné, who have researched how red wine relates to insulin resistance, were not involved in the current study.

“The potential benefit of red wine on diabetes and heart attacks has been proposed as a solution to the so-called ‘French paradox,’ the lower risk of heart attacks and diabetes in France despite high consumption of saturated fats (e.g. French cheese),” Eringa and Serné wrote in an email. Studies examining the effects of red wine components on risk factors for type 2 diabetes (such as glucose absorption by muscle tissue) have “largely produced negative results. Therefore the relationship between red wine and health can be explained by a healthier life style of people who drink in a disciplined manner, by unhealthy effects of non-alcoholic beverages such as soda or juices, or both.”

Eringa and Serné believe it is the healthier lifestyle of drinkers, rather than lower consumption of juice and soda, that accounts for the “French paradox.”

“People in the Danish study that drank alcohol more frequently had a healthier diet and had a lower BMI,” they observed.

Since few participants reported binging, the researchers say their finding of no clear link between binge drinking and diabetes risk may be due to low statistical power.

A medical ‘dictum’

Dr. William T. Cefalu, chief scientific, medical and mission officer of the American Diabetes Association, said the new study’s strengths include the large number of people surveyed, but its weaknesses include an inability to control for other risk factors such as diet. Among people with diabetes, excessive drinking increases the risk of high blood sugar and weight gain, he said.

“The Association does not recommend that people with or at risk for diabetes consume alcohol if they don’t already, but if they do, moderate consumption is recognized as generally safe and potentially of some benefit,” said Cefalu.

Dr. Len Horovitz, an internist at Lenox Hill Hospital in New York City, found the report “unsurprising.”

“It’s been kind of a dictum for quite a number of years that people who don’t drink at all don’t live as long as people who drink mildly or moderately,” said Horovitz, who added that “the theory behind that was that mild drinking, at least, was good for lower blood pressure, dilated blood vessels,” and both of these outcomes translate to better overall circulation.

“We have to remember that diabetes is not just a problem with blood sugar, it is a problem of microvascular,” said Horovitz. Microvascular circulation, which involves the body’s smallest blood vessels, is positively impacted by alcohol, he said.

In terms of research flaws, there’s always the issue of honesty and truth when people self-report their habits, said Horovitz. The authors are also not clear about the “stream of input” — how much body mass index and diet, for example, were taken into account.

“And what about recreational substances?” Horovitz said. “Drinking, recreational drug use, recreational marijuana use, medicinal marijuana use, these are all things that need to be looked at a little more closely, especially as marijuana becomes something that’s more and more legal and more and more medical in its uses.”

In the end, though, the study “generally supports the old notion, again sort of a dictum within medicine, that teetotalers don’t live as long as people who do drink.”

In the United States, the Centers for Disease Control and Prevention reports 23.1 million people have been diagnosed with diabetes, and an additional 7.2 million people are suspected of having the disease. The total, then, is 30.3 million Americans, or 9.4% of the population, living with diabetes, with type 2 diabetes — the type that can be prevented with a healthy lifestyle — accounting for up to 95% of these cases.

Globally, diabetes among adults over 18 years old has risen from 4.7% of the population, or 108 million people, in 1980 to 8.5%, or 422 million people, in 2014, according to the World Health Organization. Diabetes is a major cause of blindness, kidney failure, heart attacks, stroke and lower limb amputation.

Since alcohol is related to other diseases and conditions, “any recommendations about how to drink and how much to drink should not be inferred from this study,” said Tolstrup. She added that the most important finding of her study is that when it comes to the risk of diabetes, drinking a little bit often — instead of drinking a lot rarely — is best.
Quantum Cryptography System Breaks Daylight Distance Record

Satellites can now set up quantum communications links through the air during the day instead of just at night, potentially helping a nigh-unhackable space-based quantum Internet to operate 24/7, a new study from Chinese scientists finds.

Quantum cryptography exploits the quantum properties of particles such as photons to help encrypt and decrypt messages in a theoretically unhackable way. Scientists worldwide are now endeavoring to develop satellite-based quantum communications networks for a global real-time quantum Internet.

However, prior experiments with long-distance quantum communications links through the air were mostly conducted at night because sunlight serves as a source of noise. Previously, “the maximum range for daytime free-space quantum communication was 10 kilometers,” says study co-senior author Qiang Zhang, a quantum physicist at the University of Science and Technology of China, in Shanghai.

Now researchers led by quantum physicist Jian-Wei Pan at the University of Science and Technology of China, at Hefei, have successfully established 53-kilometer quantum cryptography links during the day between two ground stations. This research suggests that such links could work between a satellite and either a ground station or another satellite, they say.

To overcome interference from sunlight, the researchers switched from the roughly 700- to 900-nanometer wavelengths of light used in all prior daytime free-space experiments to roughly 1,550 nm. The sun is about one-fifth as bright at 1,550 nm as it is at 800 nm, and 1,550-nm light can also pass through Earth’s atmosphere with virtually no interference. Moreover, this wavelength is already widely used in telecommunications, making it more compatible with existing networks.
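That one-fifth figure follows from treating the sun as a blackbody at roughly 5,778 kelvin, its effective surface temperature; here is a quick sanity check of the spectral radiance ratio in Python, with constants rounded:

```python
# Planck's law sanity check: solar brightness at 1,550 nm vs. 800 nm.
# B(lam) = (2*h*c**2 / lam**5) / (exp(h*c / (lam*k*T)) - 1)
import math

h, c, k, T = 6.626e-34, 3.0e8, 1.381e-23, 5778.0   # SI units, sun ~5,778 K

def radiance(lam):
    return (2 * h * c**2 / lam**5) / math.expm1(h * c / (lam * k * T))

print(f"sun at 1550 nm vs 800 nm: {radiance(1550e-9) / radiance(800e-9):.2f}")
# prints ~0.20, i.e. about one-fifth as bright
```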

Researchers had previously been reluctant to use 1,550-nm light because of a lack of good commercial single-photon detectors capable of working at this wavelength. But the Shanghai group developed a compact single-photon detector for 1,550-nm light that can work at room temperature. The scientists also developed a receiver that needs less than one-tenth the field of view that receivers for nighttime quantum communications links usually require. This limited the amount of noise from stray light by a factor of several hundred.

In experiments, the scientists repeatedly established quantum communications links across Qinghai Lake, the biggest lake in China, from 3:30 p.m. to 5 p.m. local time on several sunny days, achieving transmission rates of 20 to 400 bits per second. Furthermore, they could establish these links despite roughly 48 decibels of loss in their communications channel, which is more than the roughly 40 to 45 dB of loss typically seen in communications channels between satellites and the ground and between low-Earth-orbit satellites, Zhang says. In comparison, previous daytime free-space quantum communications experiments could accommodate only about 20 dB of loss.
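Decibel figures like these translate into power ratios via 10^(dB/10), so the quoted losses are easy to put in perspective:

```python
# Converting channel loss in decibels to "how many photons survive."
for db in (20, 40, 45, 48):
    ratio = 10 ** (db / 10)
    print(f"{db} dB of loss -> roughly 1 photon in {ratio:,.0f} gets through")
```

At 48 dB, only about one photon in 63,000 survives the channel, which is why detector sensitivity and stray-light rejection matter so much.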

The researchers note that their experiments were performed in good weather, and that quantum communication is currently not possible in bad weather with today’s technology. Still, they note that bad weather is a problem only for ground-to-space links, and that it would not pose a problem for links between satellites.

In the future, the researchers expect to boost transmission rates and distance using better single-photon detectors, perhaps superconducting ones. They may also seek to exploit the quantum phenomenon known as entanglement to carry out more sophisticated forms of quantum cryptography, although this will require generating very bright sources of entangled photons that can operate in a narrow band of wavelengths, Zhang says.

Low-Cost Pliable Materials Transform Glove Into Sign-to-Text Machine

Researchers have made a low-cost smart glove that can translate the American Sign Language alphabet into text and send the messages via Bluetooth to a smartphone or computer. The glove can also be used to control a virtual hand.

While it could aid the deaf community, its developers say the smart glove could prove especially valuable for virtual and augmented reality, remote surgery, and defense uses like controlling bomb-defusing robots.

This isn’t the first gesture-tracking glove. There are companies pursuing similar devices that recognize gestures for computer control, à la the 2002 film Minority Report. Some researchers have also specifically developed gloves that convert sign language into text or audible speech.

What’s different about the new glove is its use of extremely low-cost, pliable materials, says developer Darren Lipomi, a nanoengineering professor at the University of California, San Diego. The components in the system, reported in the journal PLOS ONE, cost less than US $100 in total, Lipomi says. And unlike other gesture-recognizing gloves, which use MEMS sensors made of brittle materials, the soft, stretchable materials in Lipomi’s glove should make it more robust.

The key components of the new glove are flexible strain sensors made of a rubbery polymer. Lipomi and his team make the sensors by cutting narrow strips from a super-thin film of the polymer and coating them with conductive carbon paint.

Then they use a stretchy glue to attach nine sensors on the knuckles of an athletic leather glove, two on each finger and one on the thumb. Thin, stainless steel threads connect each sensor to a circuit board attached at the wrist. The board also has an accelerometer and a Bluetooth transmitter.

When the wearer bends their fingers, the sensors stretch and the electrical resistance across them goes up. Based on these resistance signals, the circuit assigns a digital bit to each knuckle, 0 for relaxed and 1 for bent. This creates a nine-bit code for each hand gesture of the ASL alphabet. So if all fingers are straight, the code reads 000000000; for a fist it would be 111111111.

To distinguish between ASL letters that generate the same code, the researchers incorporated an accelerometer and pressure sensors on the glove. The letters D and Z, for instance, have the same gesture but the hand zigzags for Z while it remains still for D. In U and V, meanwhile, two fingers are held together and apart respectively, which the pressure sensor detects.
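As a concrete illustration, a minimal Python sketch of that knuckle-to-code mapping might look like the following; the threshold and the form of the sensor readings are assumptions for illustration, not details from the paper:

```python
# Hypothetical sketch of the nine-bit gesture encoding described above.
BENT_RATIO = 1.5   # assumed: resistance rise that counts a knuckle as "bent"

def gesture_code(resistances, baselines):
    """Map nine strain-sensor readings to a nine-bit gesture string."""
    return "".join("1" if r / b > BENT_RATIO else "0"
                   for r, b in zip(resistances, baselines))

# Per the article: all fingers straight -> "000000000"; a fist -> "111111111".
# A lookup table plus accelerometer/pressure data (to split pairs like D/Z
# and U/V) would then map each code to an ASL letter.
```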

In tests, the glove could translate all 26 letters of the American Sign Language alphabet into text. The research team also used the glove to control a virtual hand to sign the ASL letters.

The next version of the glove will incorporate new materials that generate a tactile response so that wearers can feel what they’re touching in virtual reality. Today’s haptic devices simulate the sense of touch by applying forces and vibrations to the user. Lipomi and his students plan to convey a much broader range of signals. “We’re synthesizing materials that can be used to stimulate everything from pressure and temperature to stickiness and sliminess,” he says.

3D-Printed "Earable" Sensor Monitors Vital Signs

Fitness-tracking wristbands and bracelets have mostly been used to count steps and monitor heart rate and vital signs. Now engineers have made a 3D-printed sensor that can be worn on the ear to continuously track core body temperature for fitness and medical needs.

The “earable” also serves as a hearing aid. And it could be a platform for sensing several other vital signs, says University of California Berkeley electrical engineering and computer science professor Ali Javey.

Core body temperature is a basic indicator of health issues such as fever, insomnia, fatigue, metabolic dysfunction, and depression. Measuring it continuously is critical for infants, the elderly and those with severe conditions, says Javey. But wearable sensors available today in the form of wristbands and soft patches monitor skin temperature, which can change with the environment and is usually different from core body temperature.

Body temperature can be measured using invasive oral or rectal readings. Ear thermometers measure infrared energy emitted from the eardrum and are easier to use than more invasive devices. That’s the route Javey and his colleagues took for their earable sensor, reported in the journal ACS Sensors.

For a customized fit to an individual’s ear, the team printed their sensor using flexible materials and a 3D printer. First they printed a gauzy, disc-shaped base using a stretchable polymer. This base contains tiny channels into which the researchers inject liquid metal to make electrical interconnects in lieu of metal wires. It also has grooves for an infrared sensor; microprocessors; and a Bluetooth module that transmits temperature readings to a smartphone app. They packaged the gadget in a 3D-printed case.

Because the device covers the ear, it could affect hearing, Javey says. So the engineers also embedded a bone-conduction hearing aid, made of a microphone; data-processing circuitry; a potentiometer for adjusting volume; and an actuator. The actuator sits by the temple and converts sound to vibrations, which are transmitted through the skull bone to the inner ear.

The earable accurately measured the core body temperature of volunteers wearing it in rooms heated or cooled to various temperatures, and while exercising on a stationary bicycle.

“It can be worn continuously for around 12 hours without recharging,” Javey says. “In the future, power can be further reduced by using lower-power electronic components, including the Bluetooth module.”

The researchers plan to increase the device’s functionality by integrating sensors for measuring EEG, heart rate, and blood oxygen level. They also plan to test it in various environments.

A Revealing Leap Into Avegant’s Magical Mixed-Reality World

Photo: IEEE Spectrum Senior Editor Tekla Perry, wearing a prototype of Avegant’s mixed-reality light field display, is enthralled by a sea turtle swimming on the palm of her hand.

I’m generally not the person you want testing your virtual, augmented, or otherwise “enhanced” reality technology. I am horribly susceptible to motion sickness, my presbyopia makes focusing on Google Glass–like displays pretty much impossible, and even 3D movies do not make my eyes happy. Using a good virtual reality system, I can go maybe 30 seconds before I have to escape to the real world; with a phone-based system, even a couple of seconds is too much.

But last week I spent at least 15 minutes (though it felt like less than five) completely engaged in a sampling of virtual worlds seen through Avegant’s mixed reality viewer. The experience was magical, enthralling, amazing, wonderful—pick your superlative. I didn’t get nauseous or headachy, or feel any eyestrain at all. Indeed, my eyes felt rested (probably because that was 15 minutes not spent in front of a computer or phone screen). Another wonderful part of the experience: the company didn’t bother with extreme security measures or nondisclosure agreements (though executives are not discussing specific technical details until patent filings are complete).

Avegant is a four-year-old startup based in Belmont, Calif. Its first product, the Glyph, a head-mounted display typically used for personal entertainment viewing, has been shipping since February of last year. (The name is a mashup of the names of the founders—Edward Tang and Allan Evans.)

The company announced its transparent Light Field Display technology last month. It hasn’t said when this will be ready for manufacture, though Tang points out that the Glyph’s success shows that the company knows how to design products for manufacture and bring them to market.

Avegant’s prototype mixed reality system uses a headband to position the Avegant display. It is driven by an IBM Windows PC with an Intel i7 processor and an Nvidia graphics card running the Unity game engine.

The images, explained cofounder Tang, now chief technology officer, are projected onto the retina by an array of MEMS micromirrors, each of which controls one pixel.

That, so far, is the same as the company’s Glyph system. But unlike a standard micromirror display, which reflects light straight at the person viewing it, these light field images are projected at different angles, mimicking the way light in the real world reflects off objects to hit a person’s eyes. The difference in these angles is particularly dramatic the closer someone is to the object, creating distinct and separate focal planes; the eye naturally refocuses when it moves from one plane to another.

To avoid having the eyes deal with these multiple focal planes, explained Tang, mixed reality systems like Microsoft’s HoloLens tend to keep viewers a meter or two away from objects. Light field technology, however, can use different focal planes for different objects simultaneously, so the user perceives even very close-up objects to be realistic. (Tang makes the case for light field technology in the video below.)

To date, Tang says, most attempts to bring light field technology into head-mounted displays have involved tricky-to-manufacture technology like deformable mirrors or liquid lenses, or approaches that take huge amounts of computing power to operate, like stacked LCDs.

“We created a new method,” he said, “that has no mechanical parts and uses existing manufacturing capabilities, with a level of computation that isn’t particularly high; it can run on standard PCs with graphics cards or mobile chipsets.”

The effect is designed to be natural—that is, you see virtual objects the same way you normally see real objects, with no eyestrain caused by struggling to focus. And in the demo I was shown, it absolutely was.

I went through two mixed reality experiences in a slightly dim but not dark room with some basic furniture. The room was rigged with off-the-shelf motion tracking cameras to help map my position; the headset I wore was tethered to a PC. After a short calibration effort that allowed me to adjust the display to match the distance between my pupils, I entered a solar system visualization, walking among planets, peering up close at particular features (Earth seemed to be a little smaller than my head in this demo), and leaning even closer to trigger the playing of related audio.

Clear labels hovered near each planet, which brings up an interesting side note: I wasn’t wearing my reading glasses, but the labels, even close at hand, were quite clear. Tang mentioned that the developers have been discussing whether, for those of us who need reading glasses, it would be more realistic to make the virtual objects as blurry as the real ones. I vote no. I didn’t find it jarring that my hand, as I reached for planets, was a little fuzzy, perhaps because the virtual objects appeared brighter than real-world ones. And it was quite lovely having so much of what I was seeing be clear.

At one point in the demo, while I was checking out asteroids near Saturn, Tang suggested that I step into the asteroid belt. I was a bit apprehensive; with my VR sickness history, it seemed that watching a flow of asteroids whizzing by me on both sides would be a uniquely bad idea, but it went just fine, and I could observe quite a bit of detail in the asteroids as they flowed past me.

The second demo involved a virtual fish tank. Tang asked me to walk over to a coffee table and look down at the surface; the fish tank then appeared, sitting on top of the table. I squatted next to the tank and put my hand into it. I reached out for a sea turtle; it was just the right size to fit in my palm. I followed it with my cupped hand for a while, and started feeling a whoosh of air across my palm whenever it swept its flippers back. I wondered for a moment if there was some virtual touch gear around, but it turned out to just be my mind filling in a few blanks in the very real scene. Tang then expanded the fish tank to fill the room; now that sea turtle was too big to hold, but I couldn’t resist trying to pet it. Then, he told me, “Check out that chair,” and in a moment, a school of tiny fish swept out from under the chair legs and swooped around the nearby furniture.

After convincing me to leave the fish demo (I was enjoying the experience of snorkeling without getting wet), Tang directed me to walk towards a female avatar. She was a computer-generated human that didn’t quite leave the uncanny valley—just a standard videogame avatar downloaded from a library, Tang said. But he pointed out that I could move up and invade her personal space and watch her expression change. And it certainly did seem that this avatar was in the room with me.

Throughout all the demos, I didn’t encounter any vision issues, focus struggles, or other discomfort as I looked back and forth between near and far and real and virtual objects.

I have not been one of the anointed few who have tested Magic Leap’s much-ballyhooed light-field-based mixed reality technology (and given the company’s extreme nondisclosure agreements, I likely couldn’t say much about it if I had). So, I don’t know how Avegant’s approach compares, though I’d be willing to put Avegant’s turtle up against Magic Leap’s elephant any day.

What I do know is that it absolutely blew me away. I’m eager to see what developers eventually do with it, and I’m thrilled that I no longer have to struggle physically to visit virtual worlds.
