HP newsroom blog
Published: August 09, 2017

HP Labs intern David Ho

David Ho is about to enter his fifth year in Purdue University’s Ph.D. program in electrical and computer engineering, where he specializes in image processing and computer vision research. Ho moved to the US from Gwangju, Korea, during high school, and then attended the University of Illinois at Urbana-Champaign, where he earned both undergraduate and master’s degrees in electrical and computer engineering. This summer, Ho has been working on a collaboration between HP’s Print Software Platform organization and the Emerging Compute Lab, called Pixel Intelligence, applying his expertise in image segmentation to the challenge of picking out people in any given image.

HP: Can you tell us more about your internship project?

I’ve been using deep learning to improve what we call person segmentation, which is where a computer is able to separate the image of a person from any background. Humans can distinguish between different kinds of images very easily. But computers just see images as an array of pixel values. So we need to find ways to make computers “understand” images of people as people.

HP: How have you been doing that?

I’ve been taking several existing data sets of images where we have already established the “ground truth” of the images and using those data sets to teach a computer program what a person looks like. Once it is trained, I input new images and see how well the program can pick people out of them. The idea is to reduce the number of errors we get in doing that, and to be able to do it faster.
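Ho’s actual models aren’t described in the interview, but the evaluation loop he mentions – comparing a program’s predicted person pixels against the ground truth and counting errors – can be sketched with a toy example. The `person_class` value of 15 (the “person” label in PASCAL VOC-style datasets) and the intersection-over-union score are illustrative assumptions, not details from the interview:

```python
import numpy as np

def person_mask_from_labels(label_map, person_class=15):
    """Turn a per-pixel class map into a binary person mask.
    Class 15 is 'person' in PASCAL VOC labeling (an assumption here)."""
    return (label_map == person_class).astype(np.uint8)

def iou(pred_mask, gt_mask):
    """Intersection-over-union: a standard score comparing a predicted
    segmentation against the ground-truth mask (1.0 = perfect)."""
    inter = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return inter / union if union else 1.0

# Toy 4x4 "image": the ground truth says a person occupies the left half.
gt = np.zeros((4, 4), dtype=np.uint8)
gt[:, :2] = 1

# A prediction that spills over by one column.
pred_labels = np.zeros((4, 4), dtype=int)
pred_labels[:, :3] = 15
pred = person_mask_from_labels(pred_labels)

print(round(iou(pred, gt), 3))  # → 0.667 (8 overlapping pixels / 12 in the union)
```

“Reducing the number of errors” then amounts to pushing scores like this one closer to 1.0 across a whole test set.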

HP: How has it been going?

We’ve had some good results. One thing we’ve been able to do is get this running on a webcam, so that it can segment out people in every frame it records.

HP: What’s the challenge in doing that?

One is getting it to work for a relatively crude camera. Another, which we’re still working on, is reducing the processing required to do the segmentation. So far we’ve been running it on a processing unit designed for heavy computation. But we’d like to be able to run it on a smaller device.

HP: Will this work feature in your Ph.D. thesis?

Not directly. In my Ph.D. I’m also applying deep learning to image processing, but to microscope images, segmenting out different biological structures. So the application is different but the main idea is the same: helping computers to make sense of interesting images.

HP: Is this your first time interning at HP Labs?

Yes, and it’s my first internship in an industrial lab.

HP: What has struck you as different about working in an industrial lab setting?

I’ve been impressed how industrial labs value creating software that anyone can use. My segmentation solution was pretty good, for example, but required a lot of processing power. So my mentor, Dr. Qian Lin, has pushed me to make it smaller so it’s of more value to more people.

Published: January 08, 2018

 


In an era when laptops take pictures, phones track your movements, and digital assistants listen for instructions at home, people are increasingly worried about the sensors they are letting into their lives.

“If you see how many users are doing things like putting tape over the cameras on their laptops, that suggests there's something we can do to help them feel more comfortable,” says Mary Baker, a senior researcher in HP’s Immersive Experiences Lab.  

In response, Baker has been leading an effort at HP Labs to understand what exactly people are concerned about when it comes to interacting with today’s digital devices and to imagine ways in which those concerns might be addressed.

She began with a survey of HP consumers from a wide range of backgrounds, asking them to describe their attitudes to the smart digital assistants that are gaining in popularity with families across the world.

“To me that was a good place to focus because it's a new technology, so a lot of people are thinking about why they might or might not want to adopt it,” Baker notes. “What I found was that while the top reason for not buying an assistant was that people weren’t sure they really needed one, the second biggest was all about security and privacy – the word ‘creepy’ came up in lots of the comments.”

Indeed, it became clear that many people worry that these devices are enabling something or someone to listen in on them or see them without their knowledge.

That spurred a follow-on study where Baker interviewed a smaller group of users in depth about their attitudes to sensing technologies and challenged them to create simple prototypes of devices that would assuage their concerns.

“We wanted to know what it might take for people to just look at a device and know intuitively how private they are with respect to it,” says Baker. “Is it obvious to them how they would control it? Can they trust those indicators and controls?”

Significantly, interviewees felt that an LED “recording” indicator was not something they were able to trust. Instead, they preferred solutions that physically blocked or separated a sensor from a device to indicate that it was not currently in use.

“So while tech companies spend a lot of time trying to hide sensors, users might prefer us to make their behavior more obvious,” Baker suggests.

These insights clearly have implications for any company interested in creating devices that users feel will protect their privacy, and Baker and her HP Labs colleagues have been sharing their findings with HP’s various product groups. Most recently, Baker, along with Jim Mann from the Office of the Chief Engineer, and Cath Sheldon from Customer & Market Insights, led a workshop for teams from across each of the company’s major business units. The workshop, sponsored by Chief Engineer Chandrakant Patel, offered the opportunity to discuss and share information about design features that are most likely to reassure users and has prompted new inventions around sensor privacy.

Rotating microphone

She and her colleagues Eric Faggin and Hiro Horii have also shared a variety of conceptual sensor solutions developed by Immersive Experiences Lab engineers in response to her survey research. These include microphone units that must be physically manipulated before they work and clasps that cover cameras when not in use.

While considerations like complexity and manufacturing cost are always major determinants of final designs, teams across HP now have a better understanding of how consumers are likely to respond to sensors in future HP devices.

“We want the users’ experience with HP products to be associated strongly with protection of their privacy,” Baker says. “That’s what this research is all about.” 

Published: November 27, 2017

Multi-jet-fusion printed part on the left and a high resolution scan of the indicated portion of it on the right, showing the micro surface structure used for authentication.

An HP Labs investigation into accurately identifying and authenticating 3D-printed objects is helping enable a future where parts for high performance machines like jet engines are routinely printed to order. It may also aid the development of new systems for tracking physical objects of any kind on a massive scale.

HP Labs Distinguished Technologist Stephen Pollard

 “To use a 3D printed part in a machine like an aero-engine, you need to be able to confidently identify and track that part after it has been printed from a known and trusted printer,” observes Bristol, UK-based researcher Stephen Pollard.

One way to do that would be to add a unique identifier like a bar code to each printed item. But Pollard and his colleagues in HP’s Print Adjacencies and 3D Lab wanted to come up with an approach that added no processing or materials cost to the 3D printing process and that would also have applicability for 3D objects created via more conventional methods.

Their solution: a low cost, three-stage, automated identification and authentication system that doesn’t require a printed object to be readied for authentication in any way.  

It works by first designating a small area of the object to be tracked as the location of a “virtual forensic mark.” This need only be a centimeter or so square and can easily be pre-assigned in the digital version of the 3D object before it is printed.   

Once the item is printed, it is robotically scanned so that the location of the virtual forensic mark can be identified. Finally, a second, very high resolution scanner takes a measurement of that small area. It’s so accurate – detecting surface differences of just two thousandths of a millimeter – that it can establish a unique digital signature for every printed version of an identical 3D object.

With this identifying information on file, the object can be scanned again whenever a confirmation of the object’s specific identity is needed.

“It’s like a fingerprint scanner for physical objects,” says Pollard.
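The article doesn’t describe the matching math, but the core idea – reducing a micron-scale surface scan to a stored signature and checking later scans against it – might look something like this toy sketch, which uses normalized correlation on synthetic height maps. The scan sizes, noise levels, and 0.9 threshold are all invented for illustration:

```python
import numpy as np

def surface_signature(scan_mm):
    """Reduce a high-resolution height map (in millimeters) to a
    zero-mean, unit-length vector used as the object's signature."""
    v = scan_mm.astype(float).ravel()
    v -= v.mean()
    return v / np.linalg.norm(v)

def match_score(sig_a, sig_b):
    """Normalized correlation between two signatures: 1.0 is a perfect match."""
    return float(np.dot(sig_a, sig_b))

rng = np.random.default_rng(0)
# Synthetic 64x64 texture at the ~2 micron (0.002 mm) scale the scanner resolves.
enrolled = rng.normal(0.0, 0.002, size=(64, 64))
rescan = enrolled + rng.normal(0.0, 0.0002, size=(64, 64))  # same part, scanner noise
other = rng.normal(0.0, 0.002, size=(64, 64))               # a different copy of the part

sig = surface_signature(enrolled)                          # stored on file after printing
print(match_score(sig, surface_signature(rescan)) > 0.9)   # same object: high correlation
print(match_score(sig, surface_signature(other)) > 0.9)    # different object: near zero
```

The repeatability challenge Azhar describes below corresponds to keeping the “same part” score high even as scanning conditions vary between enrollment and later checks.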

The team has already created prototypes for most of the elements in their system. They next plan to miniaturize and integrate them into a single prototype device, creating a tool that does the work of instruments that currently cost tens of thousands of dollars for under $100 per machine.

HP Labs research engineer Faisal Azhar

One major challenge will be to place each of these elements together in a way that allows the process to be fully automated, adds Labs researcher Faisal Azhar.

“The other hard problem we face is extracting reliable and repeatable signatures of the 3D parts,” Azhar says. “We are already able to make incredibly accurate scans but those scans need to be reliably repeatable to be confident that the object we identify right after printing is the same object we later want to place, for example, in a machine.”

At present, the system is optimized to scan the surface of objects created by HP 3D printers. But the Labs identification and authentication team plans to expand its capabilities to include objects made from a more diverse array of materials.

More broadly, they are also looking to measure properties of 3D objects beyond their shape, and to devise methods for further enhancing production line integration and automated machine interactions with them. “This ‘forensic’ level of authentication and identification will really come into its own when 3D printing moves from prototyping into production, and manufacturers are printing millions and even billions of copies of any one part,” says Pollard.

Published: October 20, 2017

A speculative wearable device, the ‘Data Vaporizer’

In a guest lecture to students, faculty, and interested members of the public on October 26th at the California College of the Arts in San Francisco, HP Labs researcher Ji Won Jun will argue the case for “Design as a Speculative Inquiry.”

HP Labs researcher Ji Won Jun

“I’m going to be sharing some examples of my work to show how we can use design to think more creatively about the future and think about technology in a different way,” Jun says.

Too often, Jun believes, we view the likely impact of new technologies either in terms of solving problems with existing tools or through a fantastical lens more suited to science fiction.

“Speculative Design is about challenging our assumptions about why and how we should advance technology,” she notes. “Maybe our aim shouldn’t always be to do things faster or be more productive but instead be more about things like, say, protecting our privacy.”

One of Jun’s early projects – the Data Vaporizer – is a wearable device that does just that by offering protection from hackers. A more recent investigation for the Immersive Experiences Lab, Project Jetty, explores how we can foster stronger emotional connections between people without explicitly needing to make contact with each other.

“The point is to tweak the questions we ask ourselves and, in doing that, to provoke an alternative approach,” Jun suggests. “We’re creating prototype designs that we can share with people and, in measuring their responses to those designs, learn more about what might change as we get people to see technology in a new light.”

Jun’s lecture is part of the California College of the Arts’ annual open house for its MFA program in Design and will feature projects drawn from her own MFA studies at the Art Center College of Design in Pasadena, California and her work in HP’s Immersive Experiences Lab, which she joined in early 2016.

Previously, Jun has presented her work at the 2017 Research Through Design Conference in Edinburgh, UK, and seen it featured in media including Fast Company, Vice magazine’s Creators project and ACM Interactions magazine. She also won the 2016 SXSW Interactive Innovation Award for Student Innovation and received an Art Center Graduate Honors Fellowship.

Jun’s lecture is on Thursday, October 26th at 7:30 PM in the Boardroom at the California College of the Arts (CCA) in San Francisco.

Published: October 20, 2017

HP Labs researcher Sarthak Ghosh

“In the future, people are going to spend a lot of time in virtual reality environments,” suggests HP Labs researcher Sarthak Ghosh. And they won’t just be using VR for entertainment. “VR will also become a key tool for employees working in fields as diverse as engineering, healthcare, media production, and space science,” Ghosh says.

That raises a question Ghosh first tackled while interning at HP Labs in 2016 as a master’s student in Human Computer Interaction at Georgia Tech: how can we ensure that people working in VR environments keep track of what’s going on in the real world – retaining a sense of passing time, for example?

“If you are making a VR game, you don’t mind if your users are so engrossed in it that they lose track of time,” Ghosh observes. “But if you want people to use VR to do a job, they also need to attend meetings, write up reports, talk with colleagues and more.”

One solution would be to put a real time clock in the VR display that users see. But that takes up valuable visual real estate and taxes a human sense – vision – that is already being worked hard in such a visually immersive environment.

Instead, Ghosh decided to explore using haptic feedback – creating physical sensations with small motors – to offer clues about what’s going on outside the VR experience. Traditionally, haptic feedback has been deployed to make VR feel even more immersive. But could different types of haptic feedback also strengthen our feelings of connection to the outside world?

To find out, Ghosh built a series of five ‘haptic backpacks’ to be worn along with a VR headset. Inspired by HP’s own Omen VR Backpack, which makes it possible to create “untethered” VR experiences, each of these backpacks was augmented to deliver a different kind of physical nudge to users immersed in a virtual reality task. One backpack created the sensation of a shoulder tap at regular intervals to mark the passage of real world time, another buzzed at the shoulder, while a third buzzed the entire back. The fourth backpack created a “hugging” sensation and the final pack used small fans to blow air across the wearer’s neck.
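As a rough illustration of the time-marking backpack, the scheduling logic could be as simple as firing one motor pulse per elapsed interval of real-world time. This is not Ghosh’s implementation; the `TapMotor` class is a hypothetical stand-in for the real motor driver, and the five-minute interval is an invented example:

```python
class TapMotor:
    """Hypothetical stand-in for the backpack's vibration-motor driver."""
    def __init__(self):
        self.taps = []  # seconds (from session start) at which a tap fired

    def pulse(self, at_seconds):
        self.taps.append(at_seconds)

def mark_time(motor, session_seconds, interval_seconds):
    """Fire one shoulder tap per elapsed interval of real-world time,
    so the wearer can count taps to track the clock while in VR."""
    t = interval_seconds
    while t <= session_seconds:
        motor.pulse(t)
        t += interval_seconds

motor = TapMotor()
mark_time(motor, session_seconds=1800, interval_seconds=300)  # 30-min session, 5-min taps
print(motor.taps)  # → [300, 600, 900, 1200, 1500, 1800]
```

The interesting research questions sit on top of logic like this: which body location, pulse pattern, and interval people actually notice and can interpret while immersed.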

Trials on colleagues in HP’s Immersive Experiences Lab quickly revealed that the hugging and blown air solutions didn’t give clear enough external signals. But the first three showed promise. Ghosh led efforts to test those three forms of haptic feedback on a larger group of participants as they undertook two different VR tasks.

“Perhaps our main finding was that people did notice the alerts they were getting and for the most part they were able to connect that with the real world, so it does seem possible to use your body’s surface area to create notifications about the real world,” says Ghosh.

The study also revealed a discrepancy between the intellectual calculations people make as they count buzzes or taps to measure time and their instinctual sense of how much time has passed. Many felt more inclined to believe their less reliable instincts over their more accurate counts, offering a useful window on the dominance of our instinctual sense of time in VR environments.

In addition, participants reported a strong inclination to believe that the physical sensations they were experiencing had a significance in the virtual world.

Alex Thayer, Chief Experience Architect for the Immersive Experiences Lab

“If we can get a better handle on all of these things, it could help make for a better VR experience itself as well as letting us send clearer signals from the outside,” notes Alex Thayer, Chief Experience Architect for the Immersive Experiences Lab. 

On the issue of external notifications, the study suggested multiple areas for further analysis, such as the best patterns to use for signaling and the degree to which priming participants with information about what to expect can impact outcomes.

After completing his initial research, Ghosh returned to Georgia Tech to finish his degree. The work on his thesis with adviser Gregory Abowd was inspired by the HP Labs study. On graduation, Ghosh was hired into HP Labs as a full time researcher in the Immersive Experiences Lab so he could continue his explorations.

“One of our next steps is to ask how we can apply what we’re learning in these studies to future iterations of VR interaction and design,” Ghosh says.

That will help HP’s Immersive Experiences Lab further its goal of helping people achieve “supernatural productivity” – productivity far beyond what’s currently possible.

“We see VR as one of the technologies most likely to both disrupt and enhance how professionals do their work in the next five or ten years,” adds Thayer. “Research like this helps us anticipate that moment by enriching our understanding of what it will take to have VR be a major part of our work lives.”

Addendum - The haptic backpack project was a collaborative effort with other members of the Immersive Experiences Lab, including Hiro Horii, Kevin Smathers, and Mithra Vankipuram.

Published: October 10, 2017

From left: HP Labs researchers Adrian Baldwin and Jonathan Griffin

HP Connection Inspector, a new intelligent embedded security feature for enterprise printers developed at HP Labs, helps networked HP printers stay one step ahead of malware attacks by giving them advanced self-healing capabilities.

Announced at this month’s HP World Partner Forum in Chicago, HP Connection Inspector was developed specifically for enterprise printers, notes Adrian Baldwin, one of the Bristol, UK-based researchers behind the innovation.

“A lot of security technology that gets put into printers simply copies what is put into PCs,” he says. “HP Connection Inspector has been developed from the outset with the mechanics of how printers work – and the needs of printer users – in mind.”

Malicious actors are constantly looking for less-protected gateways into an enterprise’s larger IT network. To prevent networked printers from becoming that conduit, the HP Security Lab team focused on developing a novel approach to network traffic monitoring designed to detect threats and enable immediate responses.

Where many malware detectors must refer to libraries of known hostile programs or network addresses already associated with attacks, HP Connection Inspector focuses on detecting anomalous behaviors and then acts to secure the networked printer even before malware is confirmed to be present.

It does this by keeping a continuous watch for moments when malware is attempting to make contact with its command and control server. In the process, HP Connection Inspector learns what “normal” network traffic looks like, meaning that it can detect suspicious outbound requests even when those requests aren’t sent to known “bad” web addresses. When it detects suspicious activity, the software can immediately go into a protected mode, stopping any further unfamiliar requests and sending a warning to IT administrators.
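The real Connection Inspector logic is proprietary, but the behavior described above – learning what normal outbound traffic looks like, flagging requests to unfamiliar destinations, and escalating to a protected mode after repeated suspicion – can be caricatured in a few lines. The hostnames, threshold, and state machine here are invented for illustration:

```python
class ConnectionMonitor:
    """Toy sketch of behavior-based outbound monitoring: learn which
    destinations are 'normal' during a training window, flag requests
    to unfamiliar hosts, and enter a protected mode after repeated hits."""

    def __init__(self, alert_threshold=3):
        self.normal = set()
        self.suspicious_count = 0
        self.protected_mode = False
        self.alert_threshold = alert_threshold

    def learn(self, host):
        self.normal.add(host)

    def observe(self, host):
        if self.protected_mode and host not in self.normal:
            return "blocked"            # unfamiliar requests stopped outright
        if host in self.normal:
            return "ok"
        self.suspicious_count += 1
        if self.suspicious_count >= self.alert_threshold:
            self.protected_mode = True  # restrict the printer, warn IT
            return "blocked"
        return "flagged"

mon = ConnectionMonitor()
for host in ["fw-update.example.com", "print-queue.local"]:
    mon.learn(host)                       # the 'normal traffic' learning phase

print(mon.observe("print-queue.local"))   # → ok
print(mon.observe("c2.badhost1.net"))     # → flagged
print(mon.observe("c2.badhost2.net"))     # → flagged
print(mon.observe("c2.badhost3.net"))     # → blocked (threshold reached)
print(mon.protected_mode)                 # → True
```

Note how familiar destinations are never blocked, which mirrors the staged response Baldwin describes next: restrict what looks suspicious rather than halting the printer outright.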

“One thing that’s hard about doing this is avoiding false alarms,” says Baldwin. “We do that by restricting what the printer is allowed to do if we get suspicious, but not stopping it completely until we know that we need to – that makes the solution much more reliable than usual.”

When HP Connection Inspector detects a specific, customer-determined level of malware-like behavior, the technology can also trigger a printer reboot. This initiates a self-healing procedure without IT needing to be involved. 

“Printers need to be on all the time,” adds project manager Jonathan Griffin. “By rebooting automatically, printers aren’t left idle while waiting for IT support; that also helps reduce downtime, which is a high priority for all enterprise print users.”

In addition, these capabilities had to be developed as elegantly as possible, to ensure they would provide security without interfering with overall printing or networking performance.

“A lot of research went into creating this, but we’re quite pleased with how little space the final code actually takes up,” Baldwin notes.  

After developing the technology behind HP Connection Inspector, the HP Labs team worked extensively with colleagues from HP’s Office Printing Solutions group in Bangalore, India and Boise, Idaho to ready the solution for commercial use. It is now set to be included in all HP Enterprise LaserJet printers by the end of this year.

HP Connection Inspector is just the first of a number of printer-specific security analytics innovations the HP Labs team is developing to help detect and respond to malware attacks.