HP newsroom blog
Published: January 12, 2017


The Project Jetty team, from left to right: Hiroshi Horii, Alex Thayer, Ji Won Jun, Jishang Wei and Kevin Smathers

 A design project that connects family members via a 3D display indicating when relatives are ‘home’ – and what weather they are experiencing – is helping HP Labs better understand how technology can bring people together.

The concept, called Project Jetty, is elegantly simple: place a 3D-printed, realistic representation of your home in the home of an adult relative and keep a representation of their home in yours. Each printed house glows when its owner is home and sits in a photo frame illuminated by a tablet device, enabling the display of real time weather data.

Thanks in large part to that simplicity, the devices can have a powerful impact, says Alex Thayer, PhD, project director and senior manager in HP’s Immersive Experiences Lab.

“You might think you could foster even stronger connections through something like a live video feed, but while pictures are highly emotional, their power can also inhibit people from wanting to initiate contact,” Thayer says. “In this project we’ve found that knowing whether someone is home, or what the weather is like at the relative’s house, is actually a great point of entry into a conversation, which is one of the things we were hoping to encourage.”

The idea for Project Jetty sprang from an HP Labs design workshop where Thayer’s colleague Ji Won Jun asked, “How can we help people feel connected without actually being connected?” In response, Thayer recalled a comment from his young daughter: “I wish I could be at Grandma’s house even when I’m not there.”

That inspired team member Hiroshi Horii to create a mocked-up prototype on the spot, featuring a small, 3D house with real-time weather information projected onto it. They hit on the name Project Jetty because a jetty is anchored in one place (the land) but extends out into another (a lake or ocean), and acts as a launching or landing point for travel between the two.

The idea was promising enough for the lab to quickly launch a “design probe” – a working instantiation of the concept that could be tested in the field.

The Project Jetty device

“In an eight-week sprint our small team of engineers moved from brainstorming the idea to having the devices in use by five pairs of families,” Thayer recalls. Each pair lived within driving distance of each other but had expressed a desire to be in contact more often. They used the devices for just over a week and noted how the connection changed their behavior.

All the family pairings reported having more conversations via phone or text with each other than before, says Thayer. They also felt more connected and even used the device to see when family members had left their houses to come over to visit. One aging user noted that seeing her adult child’s house lit up helped remind her she was due to babysit, keeping her “mind organized” and making her feel better able to help care for her grandchildren.

“Everyone wanted to keep their devices at the end of the field test,” Thayer adds. “That gave us a pretty clear sense that the experience was one people really valued.”

Lab researchers aren’t necessarily looking to develop a new HP product as a result of the experiment. Instead, says Thayer, their intention was to extend their understanding of how technology can help us live better and feel more resilient in our lives.

In particular, the project has helped elucidate how technology can help people form more successful emotional connections. Using a glowing house to signal presence, for example, turned out to have more evocative power than something that was more “high tech” but also more abstract.

“We can use those insights in a wide variety of future projects,” Thayer notes.

Published: August 14, 2017

HP Labs intern Swetha Revanur

We first met with Swetha Revanur last summer, when she was a recent high school graduate heading for Stanford University and interning in HP’s Emerging Compute Lab on a project that used sensor data to create simulations of how people move around in different living spaces. This year, Revanur is back in the same lab but working on a new challenge. We caught up with her to see how her academic interests have developed over the last twelve months and to learn about what she’s been working on this time around.

HP: First of all, how was your freshman year at Stanford?

I had an amazing freshman year! I’ve met some of the most brilliant people, the classes were just the right amount of challenging, and I joined an a cappella group on campus. In December, I also traveled to Sweden to speak at the 2016 Nobel Prize Ceremonies and meet the laureates. I’m excited to start my sophomore year in September!

HP: Are you still planning to major in computer science?

Yes, that hasn’t changed! When I started at Stanford, I was interested in biocomputation, but my interests have since shifted to artificial intelligence.

HP: What prompted the change?

The decision was actually driven largely by my work at HP Labs last summer where I had a lot of exposure to the algorithmic side of computer science. I think that if I can understand these algorithms and optimize them, I can have a much larger impact in whatever sector I choose to work in. At the end of the day, machine learning can always be applied to health, and it has a huge scope. 

HP: So what are you working on this year?

I’m with the same team in the Emerging Compute Lab, but instead of looking at sensor analytics, I’ve shifted my focus to the intersection of deep learning and robotics. I’m using techniques in reinforcement learning, which lets us train software agents to find the optimal actions to take in specific environments. I’ve developed a hybrid approach that maintains the same performance as state-of-the-art reinforcement learning algorithms, while improving data and cost efficiency.

HP: How’s it going?

Reinforcement learning is a new area of study for me, and so it’s been a fruitful process of self-teaching. Initially, I was wrangling with pages of linear algebra to understand how existing methods work. Once I got my bearings, I was able to point out gaps and come up with optimizations, and now I’ve implemented the algorithm in TensorFlow.

HP: How will you test the new algorithm?

The new hybrid algorithm will be tested in simulation. I’ll start with simple tests with basic software agents. For example, I recently ran a test where a pendulum was trained to stay upright. Gradually, we’ll work up to full humanoid simulations.
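For readers unfamiliar with the technique, the core reinforcement-learning loop Revanur describes – an agent trying actions in an environment and updating its estimates of their value – can be sketched with textbook tabular Q-learning on a toy problem. This is a generic illustration, not her hybrid algorithm:

```python
import random

# Minimal tabular Q-learning on a toy problem: an agent in a five-cell
# corridor learns to walk right to reach a rewarded goal cell. This is a
# generic textbook sketch, not the hybrid algorithm described above.
N_STATES = 5               # positions 0..4; position 4 is the goal
ACTIONS = [1, -1]          # step right or left (ties break toward +1)
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment: move, clip to the corridor, reward 1.0 at the goal."""
    nxt = max(0, min(N_STATES - 1, state + action))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

random.seed(0)
for _ in range(500):                       # training episodes
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit current estimates, sometimes explore
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        nxt, r = step(s, a)
        # Q-learning update: nudge Q(s,a) toward r + gamma * max_a' Q(s',a')
        best_next = max(q[(nxt, act)] for act in ACTIONS)
        q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
        s = nxt

# The learned greedy policy steps right (+1) from every non-goal state.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)   # {0: 1, 1: 1, 2: 1, 3: 1}
```

The pendulum test she mentions follows the same pattern, only with continuous states and a learned function approximator in place of the lookup table.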

HP: Why is HP interested in this work?

A lot of folks in HP Labs are working in a fundamental robotics research space, on projects like mapping, localization, and navigation. My hybrid approach helps cut time and cost requirements in that space. In general, robotics dovetails really well into the social, business, and home application layers that HP is a major player in.

I was invited to speak at the HP Labs global all-employee meeting with our CTO, Shane Wall. The implications of better reinforcement learning are broad, the interest is there, and I’m excited to see where it takes us.

Published: August 09, 2017

HP Labs intern David Ho

David Ho is about to enter his fifth year in Purdue University’s Ph.D. program in electrical and computer engineering, where he specializes in image processing and computer vision research. Ho moved to the US from Gwangju, Korea, during high school, and then attended the University of Illinois at Urbana-Champaign for both his undergraduate and Master’s degrees in electrical and computer engineering. This summer, Ho has been working on a collaboration between HP’s Print Software Platform organization and the Emerging Compute Lab, called Pixel Intelligence, applying his expertise in image segmentation to the challenge of picking out the people in any given image.

HP: Can you tell us more about your internship project?

I’ve been using deep learning to improve what we call person segmentation, which is where a computer is able to separate the image of a person from any background. Humans can distinguish between different kinds of images very easily. But computers just see images as an array of pixel values. So we need to find ways to make computers “understand” images of people as people.

HP: How have you been doing that?

I’ve been taking several existing data sets of images where we have already established the “ground truth” of the images and using those data sets to teach a computer program what a person looks like. Once it is trained, I input new images and see how well the program can pick people out of them. The idea is to reduce the number of errors we get in doing that, and to be able to do it faster.
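As an aside on how the “errors” Ho mentions are typically counted: segmentation quality is commonly scored with intersection-over-union (IoU) between the predicted person mask and the ground-truth mask. This sketch is a standard metric for illustration and assumes nothing about his actual pipeline:

```python
import numpy as np

# Intersection-over-union (IoU) between two binary segmentation masks.
# A score of 1.0 means the predicted mask exactly matches the ground truth.
def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """IoU = |pred AND truth| / |pred OR truth| over boolean pixel masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:          # both masks empty: define as perfect agreement
        return 1.0
    return np.logical_and(pred, truth).sum() / union

# Toy 4x4 example: the prediction covers all 4 ground-truth pixels
# but also spills onto 2 background pixels.
truth = np.zeros((4, 4), dtype=bool)
truth[1:3, 1:3] = True                 # 4 true "person" pixels
pred = np.zeros((4, 4), dtype=bool)
pred[1:3, 1:4] = True                  # 6 predicted pixels, 4 overlapping
print(iou(pred, truth))                # 4 / 6 ≈ 0.667
```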

HP: How has it been going?

We’ve had some good results. One thing we’ve been able to do is get this running on a webcam, so that it can segment out people in every frame it records.

HP: What’s the challenge in doing that?

One is getting it to work for a relatively crude camera. Another, which we’re still working on, is reducing the processing required to do the segmentation. So far we’ve been running it on a processing unit designed for heavy computation. But we’d like to be able to run it on a smaller device.

HP: Will this work feature in your Ph.D. thesis?

Not directly. In my Ph.D., I’m also looking at applying deep learning to image processing, but I’m looking at understanding microscope images and segmenting out different biological structures. So the application is different but the main idea is the same: helping computers to make sense of interesting images.

HP: Is this your first time interning at HP Labs?

Yes, and it’s my first internship in an industrial lab.

HP: What has struck you as different about working in an industrial lab setting?

I’ve been impressed how industrial labs value creating software that anyone can use. My segmentation solution was pretty good, for example, but required a lot of processing power. So my mentor, Dr. Qian Lin, has pushed me to make it smaller so it’s of more value to more people.

Published: July 27, 2017

HP Labs intern Allison Moore

Allison Moore is a rising senior at Homestead High School in Cupertino, California. She’s a competitive fencer and member of her school’s robotics team. She’s been surprised at how seriously high school interns are taken at HP Labs. “I expected that I’d just be told what to do and not really be involved in developing a study,” Moore says. “But we’re all working together and I have a lot of flexibility to follow my interests in terms of the contribution I’m making.”

HP: So what are you working on this summer?

I’m helping with a user study on self-expression and clothing in HP’s Immersive Experiences Lab. Right now we’re working on developing what we want to ask people. We’re going to have people bring in pictures of different outfits that they wear for different kinds of activities and then talk about items that they use to customize and personalize their appearance in those situations.

HP: Can you explain the thinking behind the study?

People say a lot through what they wear. Sometimes it’s visual, where you are saying it to everybody. Sometimes it’s more private. It can also seem like you are making a trivial decision in deciding what to wear, but it has a big impact on how people look at you and how you feel about yourself. When you wear an item that doesn’t make you feel comfortable, you really notice it and it can change how you behave. We’re interested in that, and in how we can make people feel more comfortable with who they are.

HP: What’s your role in the study?

I’m making props that we’re using to get people thinking about possible applications of personalization and customization using 2D and 3D printing. For example, I just designed some buttons that we’re going to 3D print. I might also be going in the room and asking people questions when we do the study itself.

HP: What are you hoping you’ll find?

I hope we find ways in which people can use printing to express themselves in different situations, even ones where they feel vulnerable. So that even if you are in an environment where you have to wear clothing that you don’t like, you can still express yourself in that environment and feel comfortable in it.

HP: Is interning at HP Labs changing your thinking about what you’d like to major in at college?

It’s definitely helping me figure out the general area I want to go into. And I’m seeing that it’s okay to pursue multiple options, like science and the liberal arts, at the same time. It’s also got me thinking more about what I want out of a career – how do I follow my passions and also make a difference, and what kind of work will I want to come in and do every day?

Published: July 18, 2017

HP Labs intern Michael Ludwig

“I got really lucky and the project I’m doing here is basically applying my thesis work to 3D printing,” says HP Labs summer intern Michael Ludwig, who uses computer graphics to study the simulation of materials and their appearances and applies those insights to understanding how humans see complex materials. Ludwig has almost completed his Ph.D. in computer science at the University of Minnesota, from which he also holds a BS in computer science. When not working, he likes to bike, train his dog and write his own computer graphics programs.

HP: Tell us more about the work you are doing at HP Labs.

I study how people see things and how we can model that computationally. When you are thinking about reproducing the appearance of things in 2D, it’s mostly about color and the texture of the paper you are printing on. But with 3D printing, you have to think about color in three dimensions and also surface curvature and geometry, and then the qualities of the different kinds of materials that you are printing to. So when you want to make something look like it does on your monitor, there are lots of ways in which the two might not match. I’m trying to come up with a quantifiable metric for measuring how much they match or not.

HP: What’s the value in doing that?

Right now, when it comes to printing things in 3D you will have errors or defects that may or may not be visible. But the way we measure that accuracy is mostly by eyeballing it and saying, “I think that’s better (or not) than we have done it before.” What I’m doing is trying to put some numbers to that process that line up with the way people see things. Then we can potentially use that as our guide for how “well” something is printed.  
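To give a flavor of what “putting numbers to” perceptual matching can look like, here is a classic perceptual difference formula – the CIE76 Delta-E color difference, which measures Euclidean distance in the CIELAB color space, designed so equal distances roughly correspond to equal perceived differences. This is a standard illustration, not Ludwig’s metric:

```python
import math

# CIE76 Delta-E: Euclidean distance between two colors expressed as
# (L*, a*, b*) triples in the perceptually motivated CIELAB space.
# Values under roughly 2 are generally considered barely noticeable.
def delta_e_cie76(lab1, lab2):
    """CIE76 color difference between two (L*, a*, b*) triples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(lab1, lab2)))

target = (52.0, 42.5, 20.0)    # intended color of a printed surface
printed = (50.0, 44.0, 18.5)   # measured color of the actual print
print(delta_e_cie76(target, printed))   # sqrt(8.5) ≈ 2.92
```

A full 3D appearance metric of the kind Ludwig describes would have to extend this idea beyond color to geometry, curvature and material qualities.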

HP: How are you going about creating that metric?

I’m starting with a user study that will collect data about how people see these types of defects in 3D printed objects. Then I’m going to apply a hypothesis from my thesis to see how well it fits and models the data we collect.

HP: Do you have any results yet?

It’s a bit early for that. I’m still learning about all potential problems that come up in 3D printing. After that, I’ll establish what we’ll ask our human subjects to do and how we’ll accurately measure what they’re seeing, and then figure out how we take that data to establish the metric I’m looking to create.

HP: Will this feed back into your Ph.D. research?

Yes. Back in Minnesota, I’m working on applying the same model to a broader psycho-physical question, looking at variations in appearances across different areas and asking whether it’s possible to create a framework for a general appearance metric. So this work on 3D appearance metrics gives me another instance that will help me figure that out. But even if it only works for 3D printing, it would be a very useful tool for people in that specific field to have.

HP: What other fields could appearance metrics be useful for?

 Automotive technology is a big one, where understanding appearance impacts computer vision for assisted or automated driving technologies and also helps give people a realistic idea of how different paints and finishes would change the look of a car. But really it has use in any industrial design or quality control process where designers work with manufacturers to create a specific visual impact.

HP: How has working at HP Labs changed your perspective on the challenge you are addressing?

It’s been really valuable to see a design-to-manufacture process up close. There are also some very advanced tools here – like one that scans materials and creates a virtual representation of them – that I can see would be able to use metrics like the one I’m trying to come up with.

HP: What have you liked so far about working at HP Labs?

I’ve only had one internship before, which was at Google, and I’ve enjoyed the fact that HP Labs feels much more “scientific.” It’s been really cool to come in to work and have a fully-equipped chemistry lab ten feet from my desk that I can potentially interact with. It’s also been really validating to share my ideas with people here and have them respond so positively.

Published: July 13, 2017

Jaime Machado Neto is a firmware engineer with HP’s 3D Printing business unit in Barcelona, Spain, and a leading contributor to the MatCap3D codebase. He is holding a stochastic lattice structure he designed and processed with MatCap3D and printed with HP’s Jet Fusion printer.

3D printing could potentially transform the global manufacturing landscape. But for that to happen, the 3D print community must first solve a major data pipeline challenge: speeding the processing of complex designs into machine instructions for 3D printers.

New 3D printing methods, such as HP’s Multi Jet Fusion technology, let designers work with complex internal structures and meta-materials that are impossible to fabricate with traditional methods, notes Jun Zeng, a senior researcher in HP Labs’ fabrication technology group.  

“But it takes a lot of information to describe not only the shape but also the interior composition of a complex part,” he explains. “Additionally, the printer needs to compute auxiliary data tailored to the printing physics to ensure the physical parts that are printed match the original design.”

New research conducted by Zeng and HP Labs colleagues points to a promising approach for managing these very complex files, work now manifested in a tool kit of experimental algorithms that is helping HP’s 3D Print business group ready the future generation of HP 3D printers. 

 

Trillions of voxels

Complex objects can be represented by a collection of voxels, or volumetric pixels. Each voxel can record the intended properties of the object at that specific point – variations in color, elasticity, strength, and even conductivity of the printed material – all adding to the file’s size.

“Using voxels as data containers is not only intuitive but also very flexible,” notes Zeng. “But it also means that we have a lot of voxels that need to be dealt with.”
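The “voxels as data containers” idea can be sketched as a sparse map from integer voxel addresses to property records. This is an assumed layout for illustration only, not HP’s actual file format:

```python
from dataclasses import dataclass

# Illustrative sketch (not HP's format): each occupied voxel maps its
# integer (x, y, z) address to a record of intended material properties.
# Storing only occupied voxels keeps sparse objects far smaller than a
# dense grid would be.
@dataclass
class VoxelProps:
    color: tuple        # intended RGB color at this point
    elasticity: float   # illustrative material properties
    strength: float

grid = {}  # (x, y, z) -> VoxelProps
grid[(10, 4, 7)] = VoxelProps(color=(255, 0, 0), elasticity=0.30, strength=0.9)
grid[(10, 4, 8)] = VoxelProps(color=(250, 5, 0), elasticity=0.35, strength=0.9)

# Neighbor access matters for generating print instructions: these are the
# six face-adjacent addresses of a voxel.
def neighbors(x, y, z):
    return [(x + 1, y, z), (x - 1, y, z), (x, y + 1, z),
            (x, y - 1, z), (x, y, z + 1), (x, y, z - 1)]

occupied = [n for n in neighbors(10, 4, 7) if n in grid]
print(occupied)   # [(10, 4, 8)]
```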

A colorful dragon designed with complex internal lattice structures shared by Zeng, for example, is just a few centimeters across when printed but described by a file structure with an addressability of 1 billion voxels. 

HP 3D printers already have fabrication chambers larger than a cubic foot that can fabricate hundreds of parts in the same build, at resolutions of up to 1,200 dots per inch, where each dot can be represented by a single voxel.

“Once designers start to exploit the full voxel addressability afforded by these types of printers,” Zeng suggests, “we will be working with files that need to address tens of billions, and even a trillion, voxels.”

Files of this size present two challenges in particular. Firstly, to be moved, stored and otherwise manipulated effectively, they need to be reduced in size. But at the same time, it must be possible to reach each voxel and its neighbors quickly in order to generate machine instructions fast enough to feed them to the printer without causing a bottleneck in the printing process.
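The tension between compression and fast voxel access can be illustrated with a toy scheme: run-length encoding that also records where each run starts, so any individual voxel remains reachable by binary search instead of by decompressing the whole stream. This is illustrative only, not the structure HP uses:

```python
import bisect

# Toy compression-with-random-access sketch (not HP's data structure):
# run-length encode a row of voxel values, keeping the starting index of
# each run so any voxel can be found in O(log n_runs) with binary search.
def rle_encode(row):
    starts, values = [], []
    for i, v in enumerate(row):
        if not values or v != values[-1]:
            starts.append(i)   # index where a new run of identical values begins
            values.append(v)
    return starts, values

def rle_get(starts, values, i):
    """Random access into the compressed row without decompressing it."""
    return values[bisect.bisect_right(starts, i) - 1]

# A mostly empty row of 1,520 voxels collapses to just 3 runs.
row = [0] * 1000 + [7] * 20 + [0] * 500
starts, values = rle_encode(row)
print(len(values))                    # 3 runs instead of 1520 values
print(rle_get(starts, values, 1005))  # 7
```

Real systems use far more sophisticated structures (octrees, tiled grids), but the design goal is the same: shrink the data while keeping every voxel and its neighbors quickly addressable.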

Intended variations in an object’s properties – where it gradually gets softer, for example, or where it grows in flexibility – also impact the instructions that must be sent to the 3D print head for each specific voxel, further complicating the processing that must occur for the design to be printed as required.

“The big research challenge here comes down to how you structure the voxel data to enable both efficient compression and fast processing, which is also influenced by the computing architecture that you choose to do the voxel processing,” says Zeng.

 

New approaches, and a new toolkit

Zeng and colleagues at HP Labs believe one viable option lies in deploying new kinds of parallel processing that use both general-purpose processors (CPUs) and GPUs, chips initially developed for graphics processing. While CPUs are typically optimized to complete individual tasks with minimal latency, GPUs are optimized to take on many similar but separate tasks at once.

The HP Labs team has been working with academic and industry partners to explore using CPUs and GPUs as co-processors, including collaborating with chip maker NVIDIA.

 

Jun Zeng (right) and Dr. Rama Hoetzlein of NVIDIA at this year’s GPU Technology Conference.

“Many of the problems that need to be operated on at the voxel level can be worked on in parallel, so the GPU data paradigm fits well,” Zeng says.
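The data-parallel pattern Zeng describes – the same independent operation applied at every voxel – is exactly what GPUs accelerate. As a sketch, a vectorized NumPy pass over a voxel grid stands in here for a GPU kernel launched over the whole volume:

```python
import numpy as np

# Illustrative sketch: many per-voxel computations are "embarrassingly
# parallel" -- the same operation applied independently at every voxel --
# which is the access pattern GPUs are built for. NumPy's vectorized
# operations stand in here for a GPU kernel over the whole grid.
rng = np.random.default_rng(0)
density = rng.random((64, 64, 64))   # a 64^3 voxel grid of material density

# One data-parallel pass: threshold every voxel into print / no-print.
# No voxel depends on any other, so all 262,144 could run concurrently.
printable = density > 0.5
print(printable.shape, printable.sum())
```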

One result of this research is a set of experimental algorithms for processing 3D data structures that, for example, exploit parallelism to process voxels in an especially efficient sequence and deploy new mechanisms for describing how very large voxel structures are organized. Through a collaboration with HP’s 3D Printing business unit and HP Brazil’s research and development group, many of these algorithms are now available as a research “tool kit” to the HP developer community.

The tool kit, dubbed “Material Capturer for 3D Printing” or MatCap3D, is constantly being updated and refined following an internal open source model, and HP developers are themselves invited to contribute new code.

“As we look at the future cyber-physical world, or what is being referred to as Industry 4.0, HP’s 3D Multi Jet Fusion technology shows us that the Art-to-Part pipeline will result in the processing of trillions of voxels to produce structured, engineered materials. The computing paradigm in this instance will require new computing architectures and (distributed) computing topologies,” says HP’s Chief Engineer and Senior Fellow Chandrakant Patel.

Some of the algorithms developed in the project may find their way into future HP print systems, but their principal value lies in helping explore promising avenues for 3D print file processing, observes Zeng.

“With the progress that we’ve already made, we’re quite encouraged that it will be possible to use this method to process very complex object designs as fast as we need them to be processed,” he says.