HP newsroom blog
Published: March 24, 2017

HP IonTouch technology: imager (left) and rewritable media (right)

This month HP Labs pilots a new writable, energy-free display technology that could impact a wide swathe of industries, including finance, hospitality, healthcare, security, retail, and transportation.

HP IonTouch is a secure, integrated system for placing and updating timely, personalized visual information onto digital displays embedded in plastic cards of the size, flexibility, and durability of a standard credit card.

“The IonTouch enables non-contact imaging, removing the electronics from conventional electronic paper displays, including the display backplane that requires electrodes, transistors, interconnects, a battery, and a processor,” explains HP IonTouch project director Omer Gila. “This allows us to add a high-resolution 2.5-inch display to each card at an incremental cost of just a few tens of cents.”

The HP Labs effort is unusual for its technical ambition, requiring innovations in hardware, software, and networking as well as the chemistry and physics of a new kind of media – and for taking the company’s research division into the realm of new business creation.

“Developing it has been a huge but rewarding challenge for everyone involved,” notes Gila.

 

The IonTouch team. From left to right: Bill Holland, David George, Henryk Birecki, Raj Kelekar, Omer Gila, Napoleon Leoni, Anthony McLennan, Chuangyu Zhou, Rares Vernica, Dekel Green, and Mark Huber. Other key contributors include Marc Ramsey and Michael Lee.

A new kind of energy-free display

The low-cost, energy-free display was developed to work with the newly developed IonTouch imagers. It creates an image similar to that produced by e-readers like the Amazon Kindle, but without the electronics, and the image remains in place indefinitely unless it is rewritten by an IonTouch device.

The display media is embedded into individually identifiable plastic IonTouch cards printed by an HP Indigo digital press, resulting in a unique and portable card-sized display that can be erased and rewritten thousands of times to reflect a balance, status, score, or individualized message tailored to the owner.

The current IonTouch technology offers a 300 x 300 dpi black-and-white display with 16 levels of grayscale. The 2.5-inch writable area is large enough to show a clear ID or entertainment photo, a QR code, and text information all at the same time.
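For a rough sense of what that specification implies, the back-of-the-envelope calculation below estimates the size of a single image buffer. The 2.5 x 1.5 inch writable-area dimensions are an assumption for illustration only, since the article gives just the 2.5-inch figure.

```
# Back-of-the-envelope estimate of an IonTouch image buffer.
# 300 x 300 dpi and 16 gray levels come from the article; the
# 2.5" x 1.5" writable-area dimensions are assumed for illustration.
width_in, height_in, dpi = 2.5, 1.5, 300
levels = 16

pixels = int(width_in * dpi) * int(height_in * dpi)   # 750 x 450 = 337,500 pixels
bits_per_pixel = levels.bit_length() - 1               # 16 gray levels -> 4 bits
buffer_bytes = pixels * bits_per_pixel // 8

print(f"{pixels:,} pixels, ~{buffer_bytes / 1024:.0f} KiB per image")
# -> 337,500 pixels, ~165 KiB per image
```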

 

A novel and affordable imaging ecosystem

To realize their vision, HP Labs researchers also had to create an entirely new imaging device to erase and write onto HP IonTouch cards.

When a card is placed in this imager, a simple bar code on the back of the card uniquely identifies it to the HP IonTouch system, allowing the imaging device to retrieve whatever new information needs to be placed on the card. The imager then erases the card’s current display before printing the new information onto its electronic paper via a floating, non-contact print head in much the same way an HP InkJet head prints ink onto conventional paper – but without the ink. The entire process takes less than four seconds.
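In outline, the flow the imager performs looks something like the sketch below. It is purely illustrative: all of the object, function, and service names are hypothetical stand-ins, not HP's actual firmware interfaces.

```
# Illustrative sketch of the imaging flow described above -- not HP's
# actual firmware. Every name here is a hypothetical placeholder.

def reimage_card(imager, card_service):
    """Erase and rewrite an IonTouch card placed in the imager."""
    card_id = imager.read_barcode()               # barcode on the back identifies the card
    payload = card_service.fetch_update(card_id)  # cloud lookup: what should the card show now?

    imager.erase_display()                        # clear the current image
    imager.write_display(payload.render())        # non-contact print head writes the new image

    card_service.confirm(card_id)                 # record that the card has been updated
```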

“The image can be rewritten more than 10,000 times. Each image can stay as it is printed forever, or until you run the card through the imager again,” says HP IonTouch lead engineer Napoleon Leoni. He also notes that the cards are made to be flexible, durable, water-washable, and impact resistant – and can thus easily handle a pocket or wallet environment.

Crucially, they also cost little to produce. Where competing solutions with a comparable electronic screen size cost more than $50 per card to manufacture, HP IonTouch cards are projected to cost less than a couple of dollars to make.

“That really changes the game and opens up IonTouch cards for use in a wide variety of sectors,” Gila suggests. “Since almost every plastic card in the market can benefit from a writable display, we believe the number of potential applications is almost endless.”

Potential uses include gift cards that display personalized messages and are both refreshable and transferable; security badges that are reauthorized daily; smarter hotel door keys and medical cards; and public transport passes and loyalty cards that update their value with every ride or purchase and add fresh service information, discounts, or offers personalized for the user. The technology also has potential applications in other kinds of signage, such as durable, low-cost, rewritable shelf labels of the kind used by pharmacies, grocery stores, and other retailers.

 

A strong environmental and security message

“Making cards rewritable makes them reusable. This is good for business but also good for the environment, as it eliminates millions of wasted cards every year,” says Leoni. “Since the only way to change information on the IonTouch cards is via our IonTouch imagers, that also adds another layer of protection, making the cards very secure, too. Being able to update or rotate security codes boosts the security of credit cards and enables the reuse of gift cards, replacing the scratch-off or permanent security codes they use today.”

Another environmental benefit stems from the cards’ power consumption: they require just a few watts to be written and no power to retain their images, which translates to an annual electric bill of a few cents per imager. This also enables new handheld applications in which an HP IonTouch imager runs on a single battery charge for a whole day.

Creating this novel ion jet imaging technology was just one of many technical challenges that the team of ten or so HP Labs engineers faced and resolved. 

They also added networking and cloud integration to the system, enabling the Linux-based IonTouch imager to link with customer-owned cloud databases. A retailer, for example, may recognize a customer’s gift card as it runs through the imager, immediately debit it for a purchase, and then print the new balance on the card along with a discount for a product relevant to the customer’s previous buying habits. 
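That gift-card flow could be sketched roughly as follows, assuming a customer-owned cloud balance service. The GiftCardService class and checkout helper are hypothetical stand-ins, not an HP or retailer API.

```
# Minimal sketch of the retailer gift-card flow described above.
# GiftCardService is a toy in-memory stand-in for a retailer's cloud database.

class GiftCardService:
    def __init__(self, balances):
        self.balances = balances                  # card_id -> balance in cents

    def debit(self, card_id, amount):
        self.balances[card_id] -= amount
        return self.balances[card_id]

def checkout(service, card_id, purchase_cents):
    new_balance = service.debit(card_id, purchase_cents)
    offer = "10% off your next coffee"            # would be chosen from purchase history
    return f"Balance: ${new_balance / 100:.2f}\n{offer}"   # text the imager writes onto the card

service = GiftCardService({"CARD-001": 2500})
print(checkout(service, "CARD-001", 475))         # Balance: $20.25, plus a personalized offer
```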

 

A new business category

Recognizing their technology’s potential, the researchers from HP’s Print Adjacencies and 3D Lab teamed up with the company’s operations and supply chain teams and its Strategy and Incubation group to design an entirely new HP business concept around the HP IonTouch system.

That led them to develop imagers that are both extremely reliable and “hot swappable.” “If you have any problem with an imager, our cloud backup system ensures a fast replacement. Just swap in your spare imager, authorize it with your password or code, and off you go,” Gila explains. “Then send the problem unit back to HP for a replacement.”

Gila believes that convenience and ease of use will keep card-based services in high demand for the next several decades and notes that despite a rise in new payment methods and technologies, the pre-paid card market is still growing, with more than 10 billion new cards issued each year.

“Almost all of them could be made better with our technology,” he says.

 

The pilot

The HP IonTouch pilot currently underway deploys the technology at HP’s Palo Alto headquarters buildings as an advanced, automated digital badge entry system. The system includes a touch screen, an imager, and a cloud monitoring component, and it prints unique IonTouch visitor badges. Each badge displays the visitor’s name, the name of their host, the date, the name and logo of the visitor’s company, and a small icon unique to that day, making it easy for security personnel to confirm that visitors are present with permission. The system also links to the company’s calendar software, notifying hosts when their guests have arrived.
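As a purely hypothetical illustration of the badge content described above, the sketch below composes the listed fields, including a daily icon derived from the date. The field names and icon scheme are assumptions, not the pilot's actual implementation.

```
# Hypothetical composition of a visitor badge's contents -- an illustration
# of the fields described in the article, not the pilot's real code.
import datetime
import hashlib

DAILY_ICONS = ["circle", "square", "triangle", "star", "hexagon"]

def badge_fields(visitor, host, company, logo_path):
    today = datetime.date.today()
    # Same icon for every badge issued on a given day, hard to guess in advance.
    digest = hashlib.sha256(str(today).encode()).hexdigest()
    icon = DAILY_ICONS[int(digest, 16) % len(DAILY_ICONS)]
    return {
        "visitor": visitor,
        "host": host,
        "date": today.isoformat(),
        "company": company,
        "logo": logo_path,
        "icon": icon,
    }

print(badge_fields("A. Visitor", "O. Gila", "Example Corp", "logos/example.png"))
```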

“The pilot will give us important visibility and valuable feedback on our business and technology, including the imagers, the cards, our software, the user experience, and ease of use,” suggests Gila. “We’re very excited to be able to share it with the world.”

 

 

 

Published: July 27, 2017

HP Labs intern Allison Moore

Allison Moore is a rising senior at Homestead High School in Cupertino, California. She’s a competitive fencer and member of her school’s robotics team. She’s been surprised at how seriously high school interns are taken at HP Labs. “I expected that I’d just be told what to do and not really be involved in developing a study,” Moore says. “But we’re all working together and I have a lot of flexibility to follow my interests in terms of the contribution I’m making.”

HP: So what are you working on this summer?

I’m helping with a user study on self-expression and clothing in HP’s Immersive Experiences Lab. Right now we’re working on developing what we want to ask people. We’re going to have people bring in pictures of different outfits that they wear for different kinds of activities and then talk about items that they use to customize and personalize their appearance in those situations.

HP: Can you explain the thinking behind the study?

People say a lot through what they wear. Sometimes it’s visual, where you are saying it to everybody. Sometimes it’s more private. It can also seem like you are making a trivial decision in deciding what to wear, but it has a big impact on how people look at you and how you feel about yourself. When you wear an item that doesn’t make you feel comfortable, you really notice it and it can change how you behave. We’re interested in that, and in how we can make people feel more comfortable with who they are.

HP: What’s your role in the study?

I’m making props that we’re using to get people thinking about possible applications of personalization and customization using 2D and 3D printing. For example, I just designed some buttons that we’re going to 3D print. I might also be going in the room and asking people questions when we do the study itself.

HP: What are you hoping you’ll find?

I hope we find ways in which people can use printing to express themselves in different situations, even ones where they feel vulnerable. So that even if you are in an environment where you have to wear clothing that you don’t like, you can still express yourself in that environment and feel comfortable in it.

HP: Is interning at HP Labs changing your thinking about what you’d like to major in at college?

It’s definitely helping me figure out the general area I want to go into. And I’m seeing that it’s okay to pursue multiple options, like science and the liberal arts, at the same time. It’s also got me thinking more about what I want out of a career – how do I follow my passions and also make a difference, and what kind of work will I want to come in and do every day?

Published: July 18, 2017

HP Labs intern Michael Ludwig

“I got really lucky and the project I’m doing here is basically applying my thesis work to 3D printing,” says HP Labs summer intern Michael Ludwig, who uses computer graphics to study the simulation of materials and their appearances and applies those insights to understanding how humans see complex materials. Ludwig has almost completed his Ph.D. in computer science at the University of Minnesota, from which he also holds a BS in computer science. When not working, he likes to bike, train his dog, and write his own computer graphics programs.

HP: Tell us more about the work you are doing at HP Labs.

I study how people see things and how we can model that computationally. When you are thinking about reproducing the appearance of things in 2D, it’s mostly about color and the texture of the paper you are printing on. But with 3D printing, you have to think about color in three dimensions and also surface curvature and geometry, and then the qualities of the different kinds of materials that you are printing to. So when you want to make something look like it does on your monitor, there are lots of ways in which the two might not match. I’m trying to come up with a quantifiable metric for measuring how much they match or not.

HP: What’s the value in doing that?

Right now, when it comes to printing things in 3D you will have errors or defects that may or may not be visible. But the way we measure that accuracy is mostly by eyeballing it and saying, “I think that’s better (or not) than we have done it before.” What I’m doing is trying to put some numbers to that process that line up with the way people see things. Then we can potentially use that as our guide for how “well” something is printed.  

HP: How are you going about creating that metric?

I’m starting with a user study that will collect data about how people see these types of defects in 3D printed objects. Then I’m going to apply a hypothesis from my thesis to see if it fits and models the data that we collect.

HP: Do you have any results yet?

It’s a bit early for that. I’m still learning about all potential problems that come up in 3D printing. After that, I’ll establish what we’ll ask our human subjects to do and how we’ll accurately measure what they’re seeing, and then figure out how we take that data to establish the metric I’m looking to create.

HP: Will this feed back into your Ph.D. research?

Yes. Back in Minnesota, I’m working on applying the same model to a broader psycho-physical question, looking at variations in appearances across different areas and asking whether it’s possible to create a framework for a general appearance metric. So this work on 3D appearance metrics gives me another instance that will help me figure that out. But even if it only works for 3D printing, it would be a very useful tool for people in that specific field to have.

HP: What other fields could appearance metrics be useful for?

 Automotive technology is a big one, where understanding appearance impacts computer vision for assisted or automated driving technologies and also helps give people a realistic idea of how different paints and finishes would change the look of a car. But really it has use in any industrial design or quality control process where designers work with manufacturers to create a specific visual impact.

HP: How has working at HP Labs changed your perspective on the challenge you are addressing?

It’s been really valuable to see a design-to-manufacture process up close. There are also some very advanced tools here – like one that scans materials and creates a virtual representation of them – that I can see would be able to use metrics like the one I’m trying to come up with.

HP: What have you liked so far about working at HP Labs?

I’ve only had one internship before, which was at Google, and I’ve enjoyed the fact that HP Labs feels much more “scientific.” It’s been really cool to come in to work and have a fully-equipped chemistry lab ten feet from my desk that I can potentially interact with. It’s also been really validating to share my ideas with people here and have them respond so positively.

Published: July 13, 2017

Jaime Machado Neto is a firmware engineer with HP’s 3D Printing business unit in Barcelona, Spain, and a leading contributor to the MatCap3D codebase. He is holding a stochastic lattice structure he designed and processed with MatCap3D and printed on an HP Jet Fusion printer.

3D printing could potentially transform the global manufacturing landscape. But for that to happen, the 3D print community must first solve a major data pipeline challenge: speeding the processing of complex designs into machine instructions for 3D printers.

New 3D printing methods, such as HP’s Multi Jet Fusion technology, let designers work with complex internal structures and meta-materials that are impossible to fabricate with traditional methods, notes Jun Zeng, a senior researcher in HP Labs’ fabrication technology group.  

“But it takes a lot of information to describe not only the shape but also the interior composition of a complex part,” he explains. “Additionally, the printer needs to compute auxiliary data tailored to the printing physics to ensure the physical parts that are printed match the original design.”

New research conducted by Zeng and HP Labs colleagues points to a promising approach for managing these very complex files, work now manifested in a toolkit of experimental algorithms that is helping HP’s 3D Print business group ready the next generation of HP 3D printers.

 

Trillions of voxels

Complex objects can be represented by collections of voxels, or volumetric pixels. Each voxel can record the intended properties of the object at that specific point, such as variations in color, elasticity, strength, and even conductivity of the printed material, adding to the file’s size.

“Using voxels as data containers is not only intuitive but also very flexible,” notes Zeng. “But it also means that we have a lot of voxels that need to be dealt with.”
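As a rough illustration of the “voxel as data container” idea, a per-voxel record might look like the sketch below. The field names are assumptions for illustration, not MatCap3D's actual representation.

```
# Rough sketch of a voxel as a data container -- an illustration only.
from dataclasses import dataclass

@dataclass
class Voxel:
    gray: int            # tone / print-agent value at this point
    elasticity: float    # intended material elasticity
    strength: float      # intended structural strength
    conductive: bool     # whether this point should be conductive

# A dense grid stores one such record per (x, y, z) location. Even a modest
# 1,000 x 1,000 x 1,000 build is already a billion records, which is why
# compact, sparse encodings matter.
```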

A colorful dragon designed with complex internal lattice structures shared by Zeng, for example, is just a few centimeters across when printed but described by a file structure with an addressability of 1 billion voxels. 

HP 3D printers already have fabrication chambers larger than a cubic foot that can fabricate hundreds of parts in the same build, at a finest resolution of up to 1,200 dots per inch, where each dot can be represented by a single voxel.

“Once designers start to exploit the full voxel addressability afforded by these types of printers,” Zeng suggests, “we will be working with files that need to address tens of billions, and even a trillion, voxels.”
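A quick back-of-the-envelope check shows where those numbers come from, assuming (as an upper bound) that a one-cubic-foot build volume is addressed at 1,200 dpi along every axis; real layer heights differ from the in-plane resolution.

```
# Sanity check on the "trillions of voxels" figure.
dots_per_axis = 12 * 1_200            # 12 inches x 1,200 dpi = 14,400
voxels = dots_per_axis ** 3

print(f"{voxels:.2e} voxels")          # ~2.99e+12 -- on the order of trillions
```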

Files of this size present two challenges in particular. Firstly, to be moved, stored and otherwise manipulated effectively, they need to be reduced in size. But at the same time, it must be possible to reach each voxel and its neighbors quickly in order to generate machine instructions fast enough to feed them to the printer without causing a bottleneck in the printing process.

Intended variations in an object’s properties – where it gradually gets softer, for example, or where it grows in flexibility – also impact the instructions that must be sent to the 3D print head for each specific voxel, further complicating the processing that must occur for the design to be printed as required.

“The big research challenge here comes down to how you structure the voxel data to enable both efficient compression and fast processing, which is also influenced by the computing architecture that you choose to do the voxel processing,” says Zeng.
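One common way to get both properties is a chunked, block-sparse grid: only non-empty bricks are stored, so empty space compresses to nothing, while any voxel and its neighbors can still be reached with a few integer operations. The sketch below illustrates that general idea; it is not the data structure used in MatCap3D.

```
# Minimal block-sparse voxel grid: only non-empty 16^3 bricks are stored.
import numpy as np

BRICK = 16  # edge length of one brick, in voxels

class SparseVoxelGrid:
    def __init__(self):
        self.bricks = {}  # (bx, by, bz) -> dense 16x16x16 uint8 array

    def _locate(self, x, y, z):
        key = (x // BRICK, y // BRICK, z // BRICK)
        return key, (x % BRICK, y % BRICK, z % BRICK)

    def set(self, x, y, z, value):
        key, (i, j, k) = self._locate(x, y, z)
        brick = self.bricks.setdefault(key, np.zeros((BRICK,) * 3, dtype=np.uint8))
        brick[i, j, k] = value

    def get(self, x, y, z):
        key, (i, j, k) = self._locate(x, y, z)
        brick = self.bricks.get(key)
        return 0 if brick is None else int(brick[i, j, k])

grid = SparseVoxelGrid()
grid.set(14_000, 7_200, 300, 128)
print(grid.get(14_000, 7_200, 300), grid.get(0, 0, 0))  # 128 0
```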

 

New approaches, and a new toolkit

Zeng and colleagues at HP Labs believe one viable option lies in deploying new kinds of parallel processing that use both general-purpose processors (CPUs) and GPUs, chips initially developed for graphics processing. While CPUs are typically optimized to complete individual tasks with low latency, GPUs are optimized to take on many similar but separate tasks at once.

The HP Labs team has been working with academic and industry partners, including chip maker NVIDIA, to explore using CPUs and GPUs as co-processors.

 

Jun Zeng (right) and Dr. Rama Hoetzlein of NVIDIA at this year’s GPU Technology Conference.

“Many of the problems that need to be operated on at the voxel level can be worked on in parallel, so the GPU data paradigm fits well,” Zeng says.
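The toy example below illustrates that data-parallel pattern: the same small decision is made independently for every voxel, so the work spreads naturally across many GPU threads. NumPy's vectorized form stands in for the GPU here, and the threshold and the density-to-agent mapping are invented for illustration.

```
# Toy data-parallel per-voxel operation; NumPy stands in for a GPU array library.
import numpy as np

density = np.random.rand(256, 256, 256).astype(np.float32)  # per-voxel design property

# One independent decision per voxel: how much fusing agent to deposit.
agent = np.where(density > 0.5, (density * 255).astype(np.uint8), 0)

print(agent.shape, agent.dtype)  # (256, 256, 256) uint8
```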

One result of this research is a set of experimental algorithms for processing 3D data structures that, for example, exploit parallelism to process voxels in an especially efficient sequence and deploy new mechanisms for describing how very large voxel structures are organized. Through a collaboration with HP’s 3D Printing business unit and HP Brazil’s research and development group, many of these algorithms are now available as a research “tool kit” to the HP developer community.

The tool kit, dubbed “Material Capturer for 3D Printing” or MatCap3D, is constantly being updated and refined following an internal open source model, and HP developers are themselves invited to contribute new code.

“As we look at the future cyber-physical world, or what is being referred to as Industry 4.0, HP’s Multi Jet Fusion technology shows us that the art-to-part pipeline will result in the processing of trillions of voxels to produce structured, engineered materials. The computing paradigm in this instance will require new computing architectures and (distributed) computing topologies,” says HP’s Chief Engineer and Senior Fellow Chandrakant Patel.

Some of the algorithms developed in the project may find their way into future HP print systems, but their principal value lies in helping explore promising avenues for 3D print file processing, observes Zeng.

“With the progress that we’ve already made, we’re quite encouraged that it will be possible to use this method to process very complex object designs as fast as we need them to be processed,” he says.

Published: July 11, 2017

HP Labs intern Lydia Moog

San Francisco native Lydia Moog is a maker and engineer of physical objects, with experience in jewelry making, metalwork, and even making her own shoes. One semester shy of completing her undergraduate degree in mechanical engineering at Brown University, Moog is spending this summer in HP’s Immersive Experiences Lab exploring the intersection of material objects and human personality.

HP: What project are you working on this summer?

I’m part of a team that’s trying to better understand how people express themselves. We’re asking if there are ways to customize technologies so that people can better show off their individuality. So many products today stand at a distance from people’s personalities and we’re hoping to get away from that uniformity and allow people to express themselves through their technology choices. 

HP: How are you doing that?

We’re conducting a user study where we ask users to share different outfits that they wear in different settings, like at home or at work. The idea is to see how they curate their appearance depending on their situation so we can get a sense of where they feel able to express themselves or where they feel restricted. Then we want to see if there’s an object or technology that might allow them to further express themselves within that context. It could be something that is hidden and that only they know about, but that makes them feel more comfortable. Or it could be something more overt like a color, or texture, or photograph that is more open to the public.

HP: What’s your own role in the research?

I’m making props, related to HP’s 2D and 3D printing technologies, to show the users as examples of ways in which people might express themselves further. These are objects that in the future could be both mass produced and individually personalized to reflect the person wearing them. I am also helping conduct the study, which will be my first time doing that!

HP: Do you have any results yet?

We’re currently designing the study plan and questions, and then we’ll run through everything with real people in a couple of weeks, so it’s too early to say at the moment.  

HP: Is this kind of research new for you or something you’ve done before?

It’s new to me. I came to HP wanting to make physical objects that helped bridge the boundary between technology and the personal, so this is very much along those lines. But I’ve never done a user study like this or the kind of broad prototyping that we’re doing here.

HP: Is working at HP Labs changing your plans for after college?

I wasn’t thinking about being a researcher before and I’m definitely thinking about it now. I’m also more curious about 3D printing and where it can go in the future, especially in terms of how we might develop more sustainable and environmentally-friendly kinds of technologies. And more generally I’m curious about how we can improve the relationship between people and technology.  

HP: Is HP Labs what you expected it to be?

When I walked in the first day it was not at all what I expected. It’s very community-based and people work together collaboratively to solve problems. They’re open even to interns suggesting ideas and bringing the projects forward. It’s a very warm, welcoming atmosphere – you get the sense that people are interested in helping you advance any idea that you have.

HP: How did you hear about internship opportunities at HP Labs?

While at Brown University I had the opportunity to take several classes in metalsmithing and jewelry at the Rhode Island School of Design. One of my classmates there was Alex Ju, who was an intern here last summer and now works at HP Labs as a researcher. Alex spoke highly of HP and encouraged me to apply for an internship in her lab.

Published: June 29, 2017


From left: Arjun Patel, member of the HP Print Software Platform team, and Dr. Qian Lin, Distinguished Technologist.

HP has made a powerful portfolio of computer vision algorithms available to companies looking to turbocharge their ability to make sense of visual data.

Face detection: Identify and locate human faces in digital imagery.

The HP Pixel Intelligence portfolio unlocks a range of capabilities – from accurately locating, analyzing, and grouping faces and objects to automating image layout and cropping. HP has already used Pixel Intelligence in some of its own leading imaging products. Now other companies can add these same capabilities to their own products and services.

The algorithms are the fruit of a long running HP Labs research program focusing on computer vision, says Dr. Qian Lin, Distinguished Technologist for Computer Vision and Deep Learning Research in HP’s Emerging Compute Lab.

“We’ve been incrementally improving computer vision for a long time,” Dr. Lin notes. “But in the last few years we’ve been applying deep learning, which combines machine learning with artificial neural networks, to the problem and that has allowed us to make enormous strides in the quality of our results.”

The algorithms in the Pixel Intelligence portfolio can find faces within an image or find the same face in multiple images with great accuracy. They can also recognize specific kinds of objects in a set of pictures (for example, all images that feature a specific logo), detect facial attributes such as whether people are smiling or have their eyes open or shut, and sift through hundreds of images to make a collage of the best photos, cropped to fit within a page. Moreover, they work in real time, opening up new avenues for improvement in devices like smart home assistants that are constantly aware of their surroundings.

Face grouping: Accurately recognize and group individuals.

Some of these capabilities are unique to HP. For example, by installing HP’s Pixel Intelligence software, a print service provider could analyze a large collection of images such as wedding photos in a batch operation, automatically group faces of the same person together with high accuracy, and select the photos with the best facial image quality for inclusion in a wedding photobook. Other capabilities are offered by competing companies, but only through their own cloud servers.
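A generic detect-then-group pipeline along these lines could be sketched as below, using open-source OpenCV as a stand-in; this is not the HP Pixel Intelligence API, and the grouping step is left as a hypothetical placeholder for whatever face-embedding model a real pipeline would use.

```
# Generic detect-then-group sketch using OpenCV -- not the HP Pixel
# Intelligence API. group_by_identity is a hypothetical placeholder.
import cv2
import glob

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(path):
    image = cv2.imread(path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    boxes = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [(path, tuple(box)) for box in boxes]   # (image path, face bounding box)

all_faces = []
for photo in glob.glob("wedding_photos/*.jpg"):
    all_faces.extend(detect_faces(photo))

# group_by_identity(all_faces) would embed each face crop and cluster the
# embeddings so that photos of the same person end up together.
```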

“With Pixel Intelligence, you can host the processing software within your own data center, saving you time and bandwidth while keeping your data secure and private,” notes Dr. Lin.

The Pixel Intelligence portfolio is the result of close collaboration with HP’s Print Software Platform team. It was launched on HP’s developer site last fall and was presented to the Digital Solutions Cooperative (DSCOOP) of HP technology users in Phoenix this spring, with a repeat presentation at DSCOOP in Lyon in early June. It is already drawing interest from photo services companies that want to personalize their print workflows and automate the creation of high-quality print products.

“A lot of these companies don’t have the resources for artificial intelligence research, so they are very interested in licensing our technology,” Dr. Lin says. “We’re also hearing from companies in a wide range of industries - retail, health, security, and finance, for example – that want to know more about these capabilities.” 

Computer vision remains a continuing area of interest at HP Labs. Dr. Lin and her colleagues continue to improve the existing algorithms and will add new ones to the Pixel Intelligence portfolio.

They are also tackling new challenges, such as improving techniques for identifying 3D objects and applying advanced computer vision to ambient computing technologies that anticipate human needs and proactively address them.

“We are already placing image-gathering sensors in more devices and more places than ever before,” observes Dr. Lin. “So we see a lot of potential for this technology in future smart home, smart office, and mobile applications.”