HP newsroom blog
Published: March 24, 2017

HP IonTouch technology. Imager (left) and rewritable media (right)

This month HP Labs pilots a new writable, energy-free display technology that could impact a wide swathe of industries, including finance, hospitality, healthcare, security, retail, and transportation.

HP IonTouch is a secure, integrated system for placing and updating timely, personalized visual information onto digital displays embedded in plastic cards of the size, flexibility, and durability of a standard credit card.

“The IonTouch enables non-contact imaging, removing the electronics from conventional electronic paper displays, including the display backplane that requires electrodes, transistors, interconnects, a battery, and a processor,” explains HP IonTouch project director Omer Gila. “This allows us to add a high-resolution 2.5” display to each card at an incremental cost of just a few tens of cents.”

The HP Labs effort is unusual for its technical ambition, requiring innovations in hardware, software, and networking as well as the chemistry and physics of a new kind of media – and for taking the company’s research division into the realm of new business creation.

“Developing it has been a huge but rewarding challenge for everyone involved,” notes Gila.

 

The IonTouch Team. From left to right: Bill Holland, David George, Henryk Birecki, Raj Kelekar, Omer Gila, Napoleon Leoni, Anthony McLennan, Chuangyu Zhou, Rares Vernica, Dekel Green, and Mark Huber. Other key contributors include Marc Ramsey and Michael Lee.

A new kind of energy-free display

The low-cost, energy-free display was developed to work with the newly developed IonTouch imagers. It creates an image similar to that produced by e-readers like the Amazon Kindle, but without the electronics, and the image remains permanently in place unless rewritten by an IonTouch device.

The display media is embedded into individually identifiable plastic IonTouch cards printed by an HP Indigo digital press, resulting in a unique and portable card-sized display that can be erased and rewritten thousands of times to reflect a balance, status, score, or individualized message tailored to the owner.

The current IonTouch technology offers a 300 x 300 dpi black-and-white display with 16 levels of gray scale. The 2.5” writable area is large enough to feature a clear photo for ID or entertainment, a QR code, and text information at the same time.
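As a rough check of those specifications (assuming, purely for illustration, a square 2.5-inch writable area – the article does not give exact dimensions), the display's pixel count and per-frame data size work out as follows:

```python
# Back-of-the-envelope pixel math for the IonTouch display.
# Assumption (not stated in the article): the 2.5" writable area is square.
DPI = 300
SIDE_INCHES = 2.5
BITS_PER_PIXEL = 4  # 16 gray levels = 2**4

side_px = int(DPI * SIDE_INCHES)               # 750 pixels per side
total_px = side_px * side_px                   # 562,500 pixels
frame_bytes = total_px * BITS_PER_PIXEL // 8   # 281,250 bytes (~275 KiB)

print(side_px, total_px, frame_bytes)
```

At four bits per pixel, a full frame is under 300 KB – small enough to move to an imager over any ordinary network link.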

 

A novel and affordable imaging ecosystem

To realize their vision, HP Labs researchers also had to create an entirely new imaging device to erase and write onto HP IonTouch cards.

When a card is placed in this imager, a simple bar code on the back of the card uniquely identifies it to the HP IonTouch system, allowing the imaging device to retrieve whatever new information needs to be placed on the card. The imager then erases the card’s current display before printing the new information onto its electronic paper via a floating, non-contact print head in much the same way an HP InkJet head prints ink onto conventional paper – but without the ink. The entire process takes less than four seconds.
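The cycle the imager runs – identify, fetch, erase, rewrite – can be sketched as follows. All class and method names here are hypothetical stand-ins, since HP has not published an IonTouch API:

```python
# Minimal sketch of the imager's card-update cycle described above.
# StubImager stands in for the hardware (barcode reader plus ion-jet head);
# the backend is modeled as a plain dict keyed by card ID.

class StubImager:
    def __init__(self, card_id):
        self.card_id = card_id
        self.display = "old balance: $20"

    def read_barcode(self):
        return self.card_id

    def erase(self):
        self.display = ""

    def write(self, content):
        self.display = content


def update_card(imager, backend):
    card_id = imager.read_barcode()   # 1. barcode uniquely identifies the card
    content = backend[card_id]        # 2. retrieve whatever needs to go on it
    imager.erase()                    # 3. erase the current display
    imager.write(content)             # 4. rewrite via the non-contact head
    return imager.display


backend = {"card-042": "balance: $15 | 10% off coffee today"}
print(update_card(StubImager("card-042"), backend))
```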

“The image can be rewritten more than 10,000 times. Each image can stay as it is printed forever, or until you run the card through the imager again,” says HP IonTouch lead engineer Napoleon Leoni. He also notes that the cards are made to be flexible, durable, water-washable, and impact resistant – and can thus easily handle a pocket or wallet environment.

Crucially, they also cost little to produce. Where competing solutions with a comparable electronic screen size cost more than $50 per card to manufacture, HP IonTouch cards are projected to cost less than a couple of dollars to make.

“That really changes the game and opens up IonTouch cards for use in a wide variety of sectors,” Gila suggests. “Since almost every plastic card in the market can benefit from a writable display, we believe the number of potential applications is almost endless.”

Potential uses include gift cards that display personalized messages and are both refreshable and transferable, security badges that are reauthorized daily, smarter hotel door keys and medical cards, and public transport passes and loyalty cards that update their value with every ride or purchase and include fresh information about the service and discounts or offers that are personalized for the user. The technology also has potential application for other kinds of signage, such as durable, low-cost, rewritable shelf labels of the kind used by pharmacies, grocery stores, and other retailers.

 

A strong environmental and security message

“Making cards rewritable makes them reusable. This is good for business but also good for the environment, as it eliminates millions of wasted cards every year,” says Leoni. “Since the only way to change information on the IonTouch cards is via our IonTouch imagers, that also adds another layer of protection, making the cards very secure, too. Being able to update or rotate security codes boosts the security of credit cards and enables reuse of gift cards, replacing the scratchable or permanent security codes they use today.”

Another environmental benefit stems from the cards’ power consumption – they require just a few watts while being written and no power to retain their images, which translates to an annual electricity bill of a few cents per imager. This also enables new handheld applications in which an HP IonTouch imager runs on a single battery charge for a whole day.
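The arithmetic behind that claim is easy to reproduce. The wattage, write volume, and electricity price below are assumptions for illustration; the article says only "a few watts" per write and "less than four seconds" per imaging cycle:

```python
# Rough energy arithmetic behind the "few cents a year" claim.
# All three constants below are illustrative assumptions, not HP figures.
WATTS_WHILE_WRITING = 3.0
SECONDS_PER_WRITE = 4.0
WRITES_PER_DAY = 500
PRICE_PER_KWH = 0.15  # USD

joules_per_write = WATTS_WHILE_WRITING * SECONDS_PER_WRITE       # 12 J
kwh_per_year = joules_per_write * WRITES_PER_DAY * 365 / 3.6e6   # ~0.61 kWh
annual_cost = kwh_per_year * PRICE_PER_KWH                       # ~$0.09

print(round(kwh_per_year, 3), round(annual_cost, 3))
```

Even at a busy 500 writes a day, the imager's annual draw is well under a kilowatt-hour – consistent with an electricity bill measured in cents.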

Creating this novel ion jet imaging technology was just one of many technical challenges that the team of ten or so HP Labs engineers faced and resolved. 

They also added networking and cloud integration to the system, enabling the Linux-based IonTouch imager to link with customer-owned cloud databases. A retailer, for example, may recognize a customer’s gift card as it runs through the imager, immediately debit it for a purchase, and then print the new balance on the card along with a discount for a product relevant to the customer’s previous buying habits. 
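That retailer flow – recognize the card, debit the purchase against a cloud-held account, compose the new display content – can be sketched like this. The data model and the discount rule are invented for illustration:

```python
# Sketch of the gift-card checkout flow described above.
# The cloud database is modeled as a dict; the offer rule is hypothetical.

accounts = {"gift-7": {"balance": 50.00, "history": ["espresso machine"]}}

def checkout(card_id, amount):
    acct = accounts[card_id]
    acct["balance"] -= amount  # debit the purchase immediately
    # Personalize the printed offer from the customer's buying habits.
    offer = f"10% off {acct['history'][-1]} accessories"
    return f"Balance: ${acct['balance']:.2f} | {offer}"

print(checkout("gift-7", 12.50))  # new balance plus a tailored offer
```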

 

A new business category

Recognizing their technology’s potential, the researchers from HP’s Print Adjacencies and 3D Lab teamed up with the company’s operations and supply chain teams and its Strategy and Incubation group to design an entirely new HP business concept around the HP IonTouch system.

That led them to develop imagers that are extremely reliable and yet “hot swappable.” “If you have any problem with an imager, our cloud backup system ensures a fast replacement. Just swap in your spare imager, authorize it with your password or code, and off you go,” Gila explains. “Then send the problem unit back to HP for a replacement.”

Gila believes that convenience and ease of use will keep card-based services in high demand for the next several decades and notes that despite a rise in new payment methods and technologies, the pre-paid card market is still growing, with more than 10 billion new cards issued each year.

“Almost all of them could be made better with our technology,” he says.

 

The pilot

The HP IonTouch pilot currently underway deploys the technology at HP’s own Palo Alto headquarters buildings, featuring an advanced automated digital badge entry system based on IonTouch technology. The system includes a touch screen, an imager, and a cloud monitoring system, and prints unique IonTouch visitor badges. Each badge displays the visitor’s name, their host’s name, the date, the name and logo of the visitor’s company, and a small icon unique to that day, making it easy for security personnel to confirm that people are present with permission. The system also links to the company’s calendar software, notifying hosts when their guests have arrived.
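Composing a badge from those fields is straightforward. The rotating-icon scheme below is invented for illustration; the article says only that each day's badges carry a small icon unique to that day:

```python
import datetime

# Hypothetical badge composer for the pilot described above.
ICONS = ["star", "circle", "triangle", "square", "diamond", "arrow", "moon"]

def badge_content(visitor, host, company, day):
    # Pick an icon that changes every day (this weekly rotation is an
    # illustrative assumption, not HP's actual scheme).
    icon = ICONS[day.toordinal() % len(ICONS)]
    return f"{visitor} | host: {host} | {company} | {day.isoformat()} | {icon}"

print(badge_content("A. Guest", "O. Gila", "HP Inc.", datetime.date(2017, 3, 24)))
```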

“The pilot will give us important visibility and valuable feedback on our business and technology, including the imagers, the cards, our software, the user experience, and ease of use,” suggests Gila. “We’re very excited to be able to share it with the world.”

 

 

 

Published: September 14, 2017


Audiophiles know that sound reproduction is improved by adding more speakers to a room and making them larger. But that won’t help make today’s increasingly slim and often tinny-sounding laptops, tablets, and phones sound good.

There is a way, however, to make small devices sound larger and better, enabling a high-quality, immersive audio experience, suggests HP Labs researcher Sunil Bharitkar, a member of the Media team in HP’s Emerging Compute Lab.

“We can use software to process the audio signals on HP devices so that they approximate the spatial quality of sound that you hear in a room with a multi-loudspeaker audio system,” he says. “We call it immersive audio.”

While competing approaches offer similar processing techniques, the key to HP’s lies in applying specific audio filters and “transforms” that create natural sounding audio with a low compute complexity.

Bharitkar has been guiding an effort at HP Labs, in partnership with colleagues in HP’s Personal Systems and Print groups spearheaded by Personal Systems Chief Technologist Mike Nash, to use this research to upgrade the audio quality on HP’s mobile and desktop devices.

“Audio is an essential, and often underestimated, component of any technology experience, which is why we’re thrilled to be working in close collaboration with HP Labs to make our devices sound second to none in the industry,” says Nash.

 


The team first needed to establish objective metrics against which to measure audio performance on HP devices. Based on the outcome of those measurements, they then started redesigning HP’s audio processing technology from the ground up, an effort that has included creating a novel signal topology and a unique set of audio filters.

Additionally, the researchers are applying machine learning in their audio processing topology to classify the sound content (whether it is a movie, for example, or a song). The same classification ensures that multiple layers of unnecessary processing are not applied when content is identified as having already been processed, reducing the signal processing compute load and minimizing artifacts.
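The gating idea can be sketched simply: classify the incoming audio, and skip the enhancement chain when the content has already been processed. The "classifier" below is just a label passed in, and the filter is a placeholder; HP's actual model and filter chain are not public:

```python
# Illustrative gate: skip redundant processing for pre-processed content.

def enhance(samples):
    """Placeholder standing in for the immersive-audio filter chain."""
    return [s * 1.2 for s in samples]

def process(samples, label):
    if label == "already_processed":
        return samples          # pass through untouched: no stacked processing
    return enhance(samples)     # otherwise apply the enhancement chain

print(process([1.0, 2.0], "movie"))              # filtered
print(process([1.0, 2.0], "already_processed"))  # untouched
```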

 

Head, Torso & Mouth Simulator used by HP Labs for extracting directional cues associated with sound localization, and for speech reproduction.

This is rapidly taking users towards an experience – delivered either through a device’s small speakers or a set of headphones – that faithfully reproduces the creator’s intent for any kind of audio, from a song recorded in a small studio to a Hollywood blockbuster, while consuming as little processing power as possible.

Thanks to commonalities between internationally standardized testing methodologies used for image and audio quality assessments, the HP team has been able to draw on the experience of HP’s Print Quality Evaluation group to test their improvements, assembling several panels of non-experts to evaluate their innovations.

In an effort led by HP Mobility’s Head of Software, Chris Kruger, the first iterations of HP’s new audio processing algorithms are now being packaged into the Qualcomm Snapdragon audio processing chips used in HP mobile devices. Next up: further refining the technology and adding it to HP’s consumer offerings; toward that end, HP Labs is working closely with Sound Research, an HP partner, on integration.

Published: August 14, 2017

HP Labs intern Swetha Revanur

We first met with Swetha Revanur last summer, when she was a recent high school graduate heading for Stanford University and interning in HP’s Emerging Compute Lab on a project that used sensor data to create simulations of how people move around in different living spaces. This year, Revanur is back in the same lab but working on a new challenge. We caught up with her to see how her academic interests have developed over the last twelve months and to learn about what she’s been working on this time around.

HP: First of all, how was your freshman year at Stanford?

I had an amazing freshman year! I’ve met some of the most brilliant people, the classes were just the right amount of challenging, and I joined an a cappella group on campus. In December, I also traveled out to Sweden to speak at the 2016 Nobel Prize Ceremonies and meet the laureates. I’m excited to start my sophomore year in September!

HP: Are you still planning to major in computer science?

Yes, that hasn’t changed! When I started at Stanford, I was interested in biocomputation, but my interests have since shifted to artificial intelligence.

HP: What prompted the change?

The decision was actually driven largely by my work at HP Labs last summer where I had a lot of exposure to the algorithmic side of computer science. I think that if I can understand these algorithms and optimize them, I can have a much larger impact in whatever sector I choose to work in. At the end of the day, machine learning can always be applied to health, and it has a huge scope. 

HP: So what are you working on this year?

I’m with the same team in the Emerging Compute Lab, but instead of looking at sensor analytics, I’ve shifted my focus to the intersection of deep learning and robotics. I’m using techniques in reinforcement learning, which lets us train software agents to find the optimal actions to take in specific environments. I’ve developed a hybrid approach that maintains the same performance as state-of-the-art reinforcement learning algorithms, while improving data and cost efficiency.
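Revanur's hybrid algorithm isn't described in enough detail to reproduce here, but the core loop she refers to – an agent tries actions, observes rewards, and improves its policy – can be sketched with standard tabular Q-learning:

```python
import random

random.seed(0)  # deterministic run for illustration

def q_learning(env_step, n_states, n_actions, episodes=200,
               alpha=0.5, gamma=0.9, epsilon=0.1):
    """Learn a table Q[state][action] of expected long-term reward."""
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
            if random.random() < epsilon:
                action = random.randrange(n_actions)
            else:
                action = max(range(n_actions), key=lambda a: Q[state][a])
            next_state, reward, done = env_step(state, action)
            # Temporal-difference update toward reward plus discounted future value.
            Q[state][action] += alpha * (
                reward + gamma * max(Q[next_state]) - Q[state][action])
            state = next_state
    return Q

# Toy two-state environment: action 1 reaches the goal state and earns reward 1.
def env_step(state, action):
    return (1, 1.0, True) if action == 1 else (0, 0.0, False)

Q = q_learning(env_step, n_states=2, n_actions=2)
print(Q[0][1] > Q[0][0])  # the agent learned that action 1 is the better choice
```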

HP: How’s it going?

Reinforcement learning is a new area of study for me, and so it’s been a fruitful process of self-teaching. Initially, I was wrangling with pages of linear algebra to understand how existing methods work. Once I got my bearings, I was able to point out gaps and come up with optimizations, and now I’ve implemented the algorithm in TensorFlow.

HP: How will you test the new algorithm?

The new hybrid algorithm will be tested in simulation. I’ll start with simple tests with basic software agents. For example, I recently ran a test where a pendulum was trained to stay upright. Gradually, we’ll work up to full humanoid simulations.

HP: Why is HP interested in this work?

A lot of folks in HP Labs are working in a fundamental robotics research space, on projects like mapping, localization, and navigation. My hybrid approach helps cut time and cost requirements in that space. In general, robotics dovetails really well into the social, business, and home application layers that HP is a major player in.

I was invited to speak at the HP Labs global all-employee meeting with our CTO, Shane Wall. The implications of better reinforcement learning are broad, the interest is there, and I’m excited to see where it takes us.

Published: August 09, 2017

HP Labs intern David Ho

David Ho is about to enter his fifth year in Purdue University’s Ph.D. program in electrical and computer engineering, where he specializes in image processing and computer vision research. Ho moved to the US from Gwangju, Korea during high school, and then attended the University of Illinois at Urbana-Champaign for both undergraduate and Master’s degrees in electrical and computer engineering. This summer, Ho has been working on a collaboration between HP’s Print Software Platform organization and Emerging Compute Lab, called Pixel Intelligence, applying his expertise in image segmentation to the challenge of picking out people in any given image.

HP: Can you tell us more about your internship project?

I’ve been using deep learning to improve what we call person segmentation, which is where a computer is able to separate the image of a person from any background. Humans can distinguish between different kinds of images very easily. But computers just see images as an array of pixel values. So we need to find ways to make computers “understand” images of people as people.

HP: How have you been doing that?

I’ve been taking several existing data sets of images where we have already established the “ground truth” of the images and using those data sets to teach a computer program what a person looks like. Once it is trained, I input new images and see how well the program can pick people out of them. The idea is to reduce the number of errors we get in doing that, and to be able to do it faster.
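The article doesn't name the error measure Ho uses, but a standard way to score a predicted person mask against the ground truth is intersection-over-union (IoU), sketched here on tiny flattened binary masks (1 = person pixel):

```python
# IoU: overlap between prediction and ground truth, divided by their union.
# A score of 1.0 is a perfect match; lower means more segmentation error.

def iou(pred, truth):
    inter = sum(p and t for p, t in zip(pred, truth))
    union = sum(p or t for p, t in zip(pred, truth))
    return inter / union if union else 1.0

truth = [0, 1, 1, 1, 0, 0]
pred  = [0, 1, 1, 0, 0, 1]   # one missed pixel, one false alarm
print(iou(pred, truth))       # 2 overlapping / 4 in the union = 0.5
```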

HP: How has it been going?

We’ve had some good results. One thing we’ve been able to do is get this running on a webcam, so that it can segment out people in every frame it records.

HP: What’s the challenge in doing that?

One is getting it to work for a relatively crude camera. Another, which we’re still working on, is reducing the processing required to do the segmentation. So far we’ve been running it on a processing unit designed for heavy computation. But we’d like to be able to run it on a smaller device.

HP: Will this work feature in your Ph.D. thesis?

Not directly. In my Ph.D., I’m also looking at applying deep learning to image processing, but I’m looking at understanding microscope images and segmenting out different biological structures. So the application is different but the main idea is the same: helping computers to make sense of interesting images.

HP: Is this your first time interning at HP Labs?

Yes, and it’s my first internship in an industrial lab.

HP: What has struck you as different about working in an industrial lab setting?

I’ve been impressed how industrial labs value creating software that anyone can use. My segmentation solution was pretty good, for example, but required a lot of processing power. So my mentor, Dr. Qian Lin, has pushed me to make it smaller so it’s of more value to more people.

Published: July 27, 2017

HP Labs intern Allison Moore

Allison Moore is a rising senior at Homestead High School in Cupertino, California. She’s a competitive fencer and member of her school’s robotics team. She’s been surprised at how seriously high school interns are taken at HP Labs. “I expected that I’d just be told what to do and not really be involved in developing a study,” Moore says. “But we’re all working together and I have a lot of flexibility to follow my interests in terms of the contribution I’m making.”

HP: So what are you working on this summer?

I’m helping with a user study on self-expression and clothing in HP’s Immersive Experiences Lab. Right now we’re working on developing what we want to ask people. We’re going to have people bring in pictures of different outfits that they wear for different kinds of activities and then talk about items that they use to customize and personalize their appearance in those situations.

HP: Can you explain the thinking behind the study?

People say a lot through what they wear. Sometimes it’s visual, where you are saying it to everybody. Sometimes it’s more private. It can also seem like you are making a trivial decision in deciding what to wear, but it has a big impact on how people look at you and how you feel about yourself. When you wear an item that doesn’t make you feel comfortable, you really notice it and it can change how you behave. We’re interested in that, and in how we can make people feel more comfortable with who they are.

HP: What’s your role in the study?

I’m making props that we’re using to get people thinking about possible applications of personalization and customization using 2D and 3D printing. For example, I just designed some buttons that we’re going to 3D print. I might also be going in the room and asking people questions when we do the study itself.

HP: What are you hoping you’ll find?

I hope we find ways in which people can use printing to express themselves in different situations, even ones where they feel vulnerable. So that even if you are in an environment where you have to wear clothing that you don’t like, you can still express yourself in that environment and feel comfortable in it.

HP: Is interning at HP Labs changing your thinking about what you’d like to major in at college?

It’s definitely helping me figure out the general area I want to go into. And I’m seeing that it’s okay to pursue multiple options, like science and the liberal arts, at the same time. It’s also got me thinking more about what I want out of a career – how do I follow my passions and also make a difference, and what kind of work will I want to come in and do every day?

Published: July 18, 2017

HP Labs intern Michael Ludwig

“I got really lucky and the project I’m doing here is basically applying my thesis work to 3D printing,” says HP Labs summer intern Michael Ludwig, who uses computer graphics to study the simulation of materials and their appearances and applies those insights to understanding how humans see complex materials. Ludwig has almost completed his Ph.D. in computer science at the University of Minnesota, from which he also holds a BS in computer science. When not working, he likes to bike, train his dog, and write his own computer graphics programs.

HP: Tell us more about the work you are doing at HP Labs.

I study how people see things and how we can model that computationally. When you are thinking about reproducing the appearance of things in 2D, it’s mostly about color and the texture of the paper you are printing on. But with 3D printing, you have to think about color in three dimensions and also surface curvature and geometry, and then the qualities of the different kinds of materials that you are printing to. So when you want to make something look like it does on your monitor, there are lots of ways in which the two might not match. I’m trying to come up with a quantifiable metric for measuring how much they match or not.

HP: What’s the value in doing that?

Right now, when it comes to printing things in 3D you will have errors or defects that may or may not be visible. But the way we measure that accuracy is mostly by eyeballing it and saying, “I think that’s better (or not) than we have done it before.” What I’m doing is trying to put some numbers to that process that line up with the way people see things. Then we can potentially use that as our guide for how “well” something is printed.  

HP: How are you going about creating that metric?

I’m starting with a user study that will collect data about how people see these types of defects in 3D printed objects. Then I’m going to apply a hypothesis from my thesis to see if it fits models of the data that we collected.

HP: Do you have any results yet?

It’s a bit early for that. I’m still learning about all potential problems that come up in 3D printing. After that, I’ll establish what we’ll ask our human subjects to do and how we’ll accurately measure what they’re seeing, and then figure out how we take that data to establish the metric I’m looking to create.

HP: Will this feed back into your Ph.D. research?

Yes. Back in Minnesota, I’m working on applying the same model to a broader psycho-physical question, looking at variations in appearances across different areas and asking whether it’s possible to create a framework for a general appearance metric. So this work on 3D appearance metrics gives me another instance that will help me figure that out. But even if it only works for 3D printing, it would be a very useful tool for people in that specific field to have.

HP: What other fields could appearance metrics be useful for?

 Automotive technology is a big one, where understanding appearance impacts computer vision for assisted or automated driving technologies and also helps give people a realistic idea of how different paints and finishes would change the look of a car. But really it has use in any industrial design or quality control process where designers work with manufacturers to create a specific visual impact.

HP: How has working at HP Labs changed your perspective on the challenge you are addressing?

It’s been really valuable to see a design-to-manufacture process up close. There are also some very advanced tools here – like one that scans materials and creates a virtual representation of them – that I can see would be able to use metrics like the one I’m trying to come up with.

HP: What have you liked so far about working at HP Labs?

I’ve only had one internship before, which was at Google, and I’ve enjoyed the fact that HP Labs feels much more “scientific.” It’s been really cool to come in to work and have a fully-equipped chemistry lab ten feet from my desk that I can potentially interact with. It’s also been really validating to share my ideas with people here and have them respond so positively.