HP newsroom blog
Published: November 29, 2016



Andrew Bolwell is the Global Head of Technology Vision and HP Tech Ventures. In this role he is responsible for driving HP’s long-term innovation and technology vision, as well as HP's venture activities, working across start-up and venture capital communities to identify, source, commercialize and invest in early-stage disruptive technologies. Liaising with HP Labs, business groups, customers and partners, Andrew is defining new market segments, products and business models that will help shape HP’s future growth.

Today, we’re sharing Part 2 of a five-part series discussing HP’s future technology vision, and how key global forces known as Megatrends are being used to shape that vision and our future. Megatrends are global socio-economic, demographic and technological forces that will have a sustained and transformative impact on businesses, societies, economies, cultures and our personal lives in unimaginable ways in the years to come.

One of the trends that will have a significant impact on our cities, infrastructure, services and environment is the growing expansion of city populations, something HP calls Rapid Urbanization. The world is now more urban than rural, and the urban share of the population is expected to surpass 70% by 2050. Who will make up these city populations, how they will change our economies and what impact they will have on our environment is something businesses, governments and technologists will be challenged with for years to come.

What opportunities and challenges does Rapid Urbanization pose to HP?
As the population continues to grow and more people move to cities, our cities will get larger, and we’ll have more of them. In fact, by 2030 we will have 41 “megacities” with populations of over 10 million people each – more than the current population of Sweden. Meanwhile, the area of urbanized land could triple globally from 2000 to 2030. This is equivalent to adding an area bigger than Manhattan every single day.

These megacities will require a new level of infrastructure design and architectural planning, making it easier for residents to work and live in a dense urban setting.

Smart cities will emerge powered by robust technology infrastructure: sensors, data platforms, analytics, cloud, etc.

Roads, public transit, and parking structures will start to serve multiple purposes, generating energy, and charging electric vehicles, in addition to their original function.

Space will be at a premium, forcing us to rethink how we design products to fit in smaller, shared work and living spaces.

And the world around us will become smarter—city sidewalks that suggest walking routes based on congestion, construction, pollution etc.  Even in our homes our everyday items will make city living easier by automatically ordering supplies from neighborhood consumer goods and food outlets when we are running low, delivered by drones and robots and only in quantities we have room to store.

So will Rapid Urbanization change consumer consumption and global economics?
Historically, bigger cities have been correlated with major economic growth. With cities the size of small countries, megacities will become markets unto themselves. In the next ten years, urbanization will welcome an additional 1.8 billion consumers to the world economy. The majority of these new consumers will be in emerging markets, where annual consumption is forecast to reach $30 trillion in 2025.

Urbanization is not only driving economic growth, it’s also changing how we buy and consume products and services. It will drive demand for new services that will make it easier to share resources and space in a crowded, hectic and increasingly busy environment, propelling the sharing economy and convenience-based services.

Services that will save us time— virtual assistants that will reschedule our meetings or send clients inventory updates, and home appliances that will reorder coffee, shampoo and milk when we are running low.

Ease the traffic and transportation hassles—ride shares, parking valets and self-driving car services. 

And make our busy lives more enjoyable—reservations at the hottest restaurants, last minute weekend getaway house shares, wearables that alert us to nearby retail offers. Even sommelier and florist bots who will make sure we always have the right bottle of wine or flowers for that special occasion.

The service possibilities are endless.

What kind of toll will this rapid growth of cities pose to our environment and how can HP help?

Rapid Urbanization is already taking a toll on the environment. If things remain the same, in 2030 mankind will need the resources of two planets to sustain its current lifestyle. Electricity consumption alone is predicted to grow by 13% in the next four years. Addressing resource waste can partially offset this increasing resource demand, but we will also need to employ new technologies that can optimize resource usage. This means that sustainability will become increasingly important for both consumers and businesses.

We need to get smarter about how we manage our resources and capitalize on opportunities, new resources and new ways of doing things.

There is already great advancement being made in alternative energy sources. In Africa, a people-powered football (soccer) field generates energy from players’ footsteps, converting it into the electricity used to light the field.

This is just the beginning of how technology can enable new and better ways of creating energy, food, water and other resources – and of minimizing how much of them we use.

How does a company like HP stay ahead of this change, to innovate, adapt, reinvent and engineer experiences for the future?

Rapid Urbanization will transform cities into markets, create a new consuming class, change business models, and increase the importance of global sustainability. And while we can’t predict exactly what the future holds, we can look to these changes, along with the other major socio-economic, demographic and technological trends occurring across the globe, to help guide us as we chart our course. At HP, there is unrelenting excitement about our future in this ever-changing world.  We are on the cusp of new possibilities and innovations that will lead to products and services that will shape our future. This is what drives us to Keep Reinventing.

Published: July 27, 2017

HP Labs intern Allison Moore

Allison Moore is a rising senior at Homestead High School in Cupertino, California. She’s a competitive fencer and member of her school’s robotics team. She’s been surprised at how seriously high school interns are taken at HP Labs. “I expected that I’d just be told what to do and not really be involved in developing a study,” Moore says. “But we’re all working together and I have a lot of flexibility to follow my interests in terms of the contribution I’m making.”

HP: So what are you working on this summer?

I’m helping with a user study on self-expression and clothing in HP’s Immersive Experiences Lab. Right now we’re working on developing what we want to ask people. We’re going to have people bring in pictures of different outfits that they wear for different kinds of activities and then talk about items that they use to customize and personalize their appearance in those situations.

HP: Can you explain the thinking behind the study?

People say a lot through what they wear. Sometimes it’s visual, where you are saying it to everybody. Sometimes it’s more private. It can also seem like you are making a trivial decision in deciding what to wear, but it has a big impact on how people look at you and how you feel about yourself. When you wear an item that doesn’t make you feel comfortable, you really notice it and it can change how you behave. We’re interested in that, and in how we can make people feel more comfortable with who they are.

HP: What’s your role in the study?

I’m making props that we’re using to get people thinking about possible applications of personalization and customization using 2D and 3D printing. For example, I just designed some buttons that we’re going to 3D print. I might also be going in the room and asking people questions when we do the study itself.

HP: What are you hoping you’ll find?

I hope we find ways in which people can use printing to express themselves in different situations, even ones where they feel vulnerable. So that even if you are in an environment where you have to wear clothing that you don’t like, you can still express yourself in that environment and feel comfortable in it.

HP: Is interning at HP Labs changing your thinking about what you’d like to major in at college?

It’s definitely helping me figure out the general area I want to go into. And I’m seeing that it’s okay to pursue multiple options, like science and the liberal arts, at the same time. It’s also got me thinking more about what I want out of a career – how do I follow my passions and also make a difference, and what kind of work will I want to come in and do every day?

Published: July 18, 2017

HP Labs intern Michael Ludwig

“I got really lucky and the project I’m doing here is basically applying my thesis work to 3D printing,” says HP Labs summer intern Michael Ludwig, who uses computer graphics to study the simulation of materials and their appearances and applies those insights to understanding how humans see complex materials. Ludwig has almost completed his Ph.D. in computer science at the University of Minnesota, from which he also holds a BS in computer science. When not working, he likes to bike, train his dog and write his own computer graphics programs.

HP: Tell us more about the work you are doing at HP Labs.

I study how people see things and how we can model that computationally. When you are thinking about reproducing the appearance of things in 2D, it’s mostly about color and the texture of the paper you are printing on. But with 3D printing, you have to think about color in three dimensions and also surface curvature and geometry, and then the qualities of the different kinds of materials that you are printing to. So when you want to make something look like it does on your monitor, there are lots of ways in which the two might not match. I’m trying to come up with a quantifiable metric for measuring how much they match or not.

HP: What’s the value in doing that?

Right now, when it comes to printing things in 3D you will have errors or defects that may or may not be visible. But the way we measure that accuracy is mostly by eyeballing it and saying, “I think that’s better (or not) than we have done it before.” What I’m doing is trying to put some numbers to that process that line up with the way people see things. Then we can potentially use that as our guide for how “well” something is printed.  

HP: How are you going about creating that metric?

I’m starting with a user study that will collect data about how people see these types of defects in 3D printed objects. Then I’m going to apply a hypothesis from my thesis to see if it fits models of the data that we collect.

HP: Do you have any results yet?

It’s a bit early for that. I’m still learning about all potential problems that come up in 3D printing. After that, I’ll establish what we’ll ask our human subjects to do and how we’ll accurately measure what they’re seeing, and then figure out how we take that data to establish the metric I’m looking to create.

HP: Will this feed back into your Ph.D. research?

Yes. Back in Minnesota, I’m working on applying the same model to a broader psycho-physical question, looking at variations in appearances across different areas and asking whether it’s possible to create a framework for a general appearance metric. So this work on 3D appearance metrics gives me another instance that will help me figure that out. But even if it only works for 3D printing, it would be a very useful tool for people in that specific field to have.

HP: What other fields could appearance metrics be useful for?

 Automotive technology is a big one, where understanding appearance impacts computer vision for assisted or automated driving technologies and also helps give people a realistic idea of how different paints and finishes would change the look of a car. But really it has use in any industrial design or quality control process where designers work with manufacturers to create a specific visual impact.

HP: How has working at HP Labs changed your perspective on the challenge you are addressing?

It’s been really valuable to see a design-to-manufacture process up close. There are also some very advanced tools here – like one that scans materials and creates a virtual representation of them – that I can see would be able to use metrics like the one I’m trying to come up with.

HP: What have you liked so far about working at HP Labs?

I’ve only had one internship before, which was at Google, and I’ve enjoyed the fact that HP Labs feels much more “scientific.” It’s been really cool to come in to work and have a fully-equipped chemistry lab ten feet from my desk that I can potentially interact with. It’s also been really validating to share my ideas with people here and have them respond so positively.

Published: July 13, 2017

Jaime Machado Neto is a firmware engineer with HP’s 3D Printing business unit in Barcelona, Spain, and a leading contributor to the MatCap3D codebase. He is holding a stochastic lattice structure he designed and processed with MatCap3D and printed with HP’s Jet Fusion printer.

3D printing could potentially transform the global manufacturing landscape. But for that to happen, the 3D print community must first solve a major data pipeline challenge: speeding the processing of complex designs into machine instructions for 3D printers.

New 3D printing methods, such as HP’s Multi Jet Fusion technology, let designers work with complex internal structures and meta-materials that are impossible to fabricate with traditional methods, notes Jun Zeng, a senior researcher in HP Labs’ fabrication technology group.  

“But it takes a lot of information to describe not only the shape but also the interior composition of a complex part,” he explains. “Additionally, the printer needs to compute auxiliary data tailored to the printing physics to ensure the physical parts that are printed match the original design.”

New research conducted by Zeng and HP Labs colleagues points to a promising approach for managing these very complex files, work now manifested in a tool kit of experimental algorithms that is helping HP’s 3D Print business group ready the future generation of HP 3D printers. 


Trillions of voxels

Complex objects can be represented by a collection of voxels, or volumetric pixels. Each voxel can record the intended properties of the object at that specific point, such as variations in color, elasticity, strength, and even conductivity of the printed material, adding to the file’s size.

“Using voxels as data containers is not only intuitive but also very flexible,” notes Zeng. “But it also means that we have a lot of voxels that need to be dealt with.”

A colorful dragon designed with complex internal lattice structures shared by Zeng, for example, is just a few centimeters across when printed but described by a file structure with an addressability of 1 billion voxels. 

HP 3D printers already have fabrication chambers larger than a cubic foot that can fabricate hundreds of parts in the same build, at resolutions of up to 1,200 dots per inch, where each dot can be represented by a single voxel.

“Once designers start to exploit the full voxel addressability afforded by these types of printers,” Zeng suggests, “we will be working with files that need to address tens of billions, and even a trillion, voxels.”
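Zeng’s figures are easy to sanity-check. A quick back-of-the-envelope calculation, assuming full voxel addressability at 1,200 dots per inch along each axis of a one-cubic-foot build chamber:

```python
DPI = 1200      # finest addressable resolution per axis (from the article)
EDGE_IN = 12    # one cubic foot build chamber: 12 inches per edge

per_axis = DPI * EDGE_IN   # addressable voxels along one edge
total = per_axis ** 3      # addressable voxels in the full build volume

print(f"{per_axis:,} voxels per axis -> {total:,} addressable voxels")
# → 14,400 voxels per axis -> 2,985,984,000,000 addressable voxels
```

That is roughly three trillion addressable voxels in a single build, consistent with the trillion-voxel scale Zeng describes.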

Files of this size present two challenges in particular. Firstly, to be moved, stored and otherwise manipulated effectively, they need to be reduced in size. But at the same time, it must be possible to reach each voxel and its neighbors quickly in order to generate machine instructions fast enough to feed them to the printer without causing a bottleneck in the printing process.

Intended variations in an object’s properties – where it gradually gets softer, for example, or where it grows in flexibility – also impact the instructions that must be sent to the 3D print head for each specific voxel, further complicating the processing that must occur for the design to be printed as required.

“The big research challenge here comes down to how you structure the voxel data to enable both efficient compression and fast processing, which is also influenced by the computing architecture that you choose to do the voxel processing,” says Zeng.
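As an illustration of the trade-off Zeng describes, here is a minimal sketch, not HP’s actual data structure, of a chunked sparse voxel grid: empty regions cost nothing to store, yet any voxel and its neighbors remain reachable in constant time, as a slicer generating per-voxel machine instructions would need.

```python
from collections import defaultdict

CHUNK = 16  # chunk edge length in voxels (an illustrative choice)

class SparseVoxelGrid:
    """Chunked sparse store: only chunks containing set voxels exist."""

    def __init__(self):
        # chunk coordinate -> {local voxel coordinate: property dict}
        self.chunks = defaultdict(dict)

    @staticmethod
    def _split(x, y, z):
        # Split a global coordinate into (chunk key, local key).
        return ((x // CHUNK, y // CHUNK, z // CHUNK),
                (x % CHUNK, y % CHUNK, z % CHUNK))

    def set(self, x, y, z, props):
        ckey, vkey = self._split(x, y, z)
        self.chunks[ckey][vkey] = props

    def get(self, x, y, z):
        ckey, vkey = self._split(x, y, z)
        return self.chunks.get(ckey, {}).get(vkey)

    def neighbors(self, x, y, z):
        # 6-connected neighborhood lookup, still O(1) per neighbor.
        for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            v = self.get(x + dx, y + dy, z + dz)
            if v is not None:
                yield v

grid = SparseVoxelGrid()
grid.set(0, 0, 0, {"color": (255, 0, 0), "elasticity": 0.3})
grid.set(1, 0, 0, {"color": (250, 5, 0), "elasticity": 0.35})
```

A production pipeline would layer real compression (octrees, run-length or entropy coding) on top, but the shape of the compromise is the same: never pay for empty space, never give up fast local access.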


New approaches, and a new toolkit

Zeng and colleagues at HP Labs believe one viable option lies in deploying new kinds of parallel processing using both conventional computer chips (CPUs) and GPUs, chips initially developed for graphics processing. While CPUs are typically optimized for low-latency execution of a few tasks at a time, GPUs are optimized to carry out many similar but independent tasks at once.

The HP Labs team has been working with academic and industry partners, including chip maker NVIDIA, to explore using CPUs and GPUs as co-processors.


Jun Zeng (right) and Dr. Rama Hoetzlein of NVIDIA at this year’s GPU Technology Conference.

“Many of the problems that need to be operated on at the voxel level can be worked on in parallel, so the GPU data paradigm fits well,” Zeng says.
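The data-parallel shape Zeng refers to can be sketched in miniature. The per-voxel transform below (`densify`, a hypothetical mapping from a designed softness value to a print-agent density, not an actual HP formula) is mapped over independent chunks of voxels in parallel. Threads stand in for GPU threads here only to keep the sketch portable; in CPython they illustrate the structure of the computation rather than a true speed-up, whereas a GPU kernel would run the same independent work over millions of elements at once.

```python
from concurrent.futures import ThreadPoolExecutor

def densify(voxel):
    # Hypothetical per-voxel transform: softer design intent means
    # less fusing agent (purely illustrative arithmetic).
    return {"agent_density": round(1.0 - voxel["softness"], 3)}

def process_chunks(chunks, workers=4):
    # Chunks are independent, so they can be distributed to workers
    # with no coordination between them -- the same embarrassingly
    # parallel shape a GPU kernel exploits at far larger scale.
    def run(chunk):
        return [densify(v) for v in chunk]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(run, chunks))

chunks = [[{"softness": 0.25}, {"softness": 0.5}],
          [{"softness": 0.75}]]
instructions = process_chunks(chunks)
# → [[{'agent_density': 0.75}, {'agent_density': 0.5}],
#    [{'agent_density': 0.25}]]
```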

One result of this research is a set of experimental algorithms for processing 3D data structures that, for example, exploit parallelism to process voxels in an especially efficient sequence and deploy new mechanisms for describing how very large voxel structures are organized. Through a collaboration with HP’s 3D Printing business unit and HP Brazil’s research and development group, many of these algorithms are now available as a research “tool kit” to the HP developer community.

The tool kit, dubbed “Material Capturer for 3D Printing” or MatCap3D, is constantly being updated and refined following an internal open source model, and HP developers are themselves invited to contribute new code.

“As we look at the future cyber-physical world, or what is being referred to as Industry 4.0, HP’s Multi Jet Fusion technology shows us that the art-to-part pipeline will result in the processing of trillions of voxels to produce structured, engineered materials. The computing paradigm in this instance will require new computing architectures and (distributed) computing topologies,” says HP’s Chief Engineer and Senior Fellow Chandrakant Patel.

Some of the algorithms developed in the project may find their way into future HP print systems, but their principal value lies in helping explore promising avenues for 3D print file processing, observes Zeng.

“With the progress that we’ve already made, we’re quite encouraged that it will be possible to use this method to process very complex object designs as fast as we need them to be processed,” he says.

Published: July 11, 2017

HP Labs intern Lydia Moog

San Francisco native Lydia Moog is a maker and engineer of physical objects, with experience in jewelry making, metalwork, and even making her own shoes. One semester shy of completing her undergraduate degree in mechanical engineering at Brown University, Moog is spending this summer in HP’s Immersive Experiences Lab exploring the intersection of material objects and human personality.

HP: What project are you working on this summer?

I’m part of a team that’s trying to better understand how people express themselves. We’re asking if there are ways to customize technologies so that people can better show off their individuality. So many products today stand at a distance from people’s personalities and we’re hoping to get away from that uniformity and allow people to express themselves through their technology choices. 

HP: How are you doing that?

We’re conducting a user study where we ask users to share different outfits that they wear in different settings, like at home or at work. The idea is to see how they curate their appearance depending on their situation so we can get a sense of where they feel able to express themselves or where they feel restricted. Then we want to see if there’s an object or technology that might allow them to further express themselves within that context. It could be something that is hidden and that only they know about, but that makes them feel more comfortable. Or it could be something more overt like a color, or texture, or photograph that is more open to the public.

HP: What’s your own role in the research?

I’m making props to show the users that are related to HP’s 2D and 3D printing technologies and are examples of ways in which people might express themselves further. These are objects that in the future could be both mass produced and also individually personalized to reflect the person wearing it. I am also helping conduct the study, which will be my first time doing that!

HP: Do you have any results yet?

We’re currently designing the study plan and questions, and then we’ll run through everything with real people in a couple of weeks, so it’s too early to say at the moment.  

HP: Is this kind of research new for you or something you’ve done before?

It’s new to me. I came to HP wanting to make physical objects that helped bridge the boundary between technology and the personal, so this is very much along those lines. But I’ve never done a user study like this or the kind of broad prototyping that we’re doing here.

HP: Is working at HP Labs changing your plans for after college?

I wasn’t thinking about being a researcher before and I’m definitely thinking about it now. I’m also more curious about 3D printing and where it can go in the future, especially in terms of how we might develop more sustainable and environmentally-friendly kinds of technologies. And more generally I’m curious about how we can improve the relationship between people and technology.  

HP: Is HP Labs what you expected it to be?

When I walked in the first day it was not at all what I expected. It’s very community-based and people work together collaboratively to solve problems. They’re open even to interns suggesting ideas and bringing the projects forward. It’s a very warm, welcoming atmosphere – you get the sense that people are interested in helping you advance any idea that you have.

HP: How did you hear about internship opportunities at HP Labs?

While at Brown University I had the opportunity to take several classes in metalsmithing and jewelry at the Rhode Island School of Design. One of my classmates there was Alex Ju, who was an intern here last summer and now works at HP Labs as a researcher. Alex spoke highly of HP and encouraged me to apply for an internship in her lab.

Published: June 29, 2017

From left: Arjun Patel, member of the HP Print Software Platform team, and Dr. Qian Lin, Distinguished Technologist.

HP has made a powerful portfolio of computer vision algorithms available to companies looking to turbocharge their ability to make sense of visual data.

Face detection: Identify and locate human faces in digital imagery.

The HP Pixel Intelligence portfolio unlocks a range of capabilities - from accurately locating, analyzing, and grouping faces and objects to automating image layout and cropping. HP has already used Pixel Intelligence in some of its own leading imaging products. Now other companies can add these same capabilities to their own products and services.

The algorithms are the fruit of a long running HP Labs research program focusing on computer vision, says Dr. Qian Lin, Distinguished Technologist for Computer Vision and Deep Learning Research in HP’s Emerging Compute Lab.

“We’ve been incrementally improving computer vision for a long time,” Dr. Lin notes. “But in the last few years we’ve been applying deep learning, which combines machine learning with artificial neural networks, to the problem and that has allowed us to make enormous strides in the quality of our results.”

The algorithms in the Pixel Intelligence portfolio can find faces within an image or find the same face in multiple images with great accuracy. They can also recognize specific kinds of objects in a set of pictures (for example, all images that feature a specific logo), detect facial attributes such as whether people are smiling or their eyes are open or shut, and sift through hundreds of images to make a collage of the best photos, cropped to fit within a page. Moreover, they work in real time, opening up new avenues for improvement in devices like smart home assistants that are constantly aware of their surroundings.

Face grouping: Accurately recognize and group individuals.

Some of these capabilities are unique to HP. For example, by installing HP’s Pixel Intelligence software, a print service provider could analyze a large collection of images such as wedding photos in a batch operation, automatically group faces of the same person together with high accuracy, and select the photos with the best facial image quality for inclusion in a wedding photobook. Other capabilities are offered by competing companies, but only through their own cloud servers.
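At its core, face grouping reduces to clustering: each detected face is converted into a numeric descriptor, and descriptors that lie close together are assumed to belong to the same person. The toy sketch below uses hand-made 2-D vectors and a greedy cosine-similarity grouping; a production system such as Pixel Intelligence would use learned, high-dimensional embeddings and more robust clustering, so treat this purely as an illustration of the idea.

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def group_faces(embeddings, threshold=0.9):
    """Greedy grouping: each face joins the first group whose
    representative it resembles closely enough, else starts a new group."""
    groups = []  # list of (representative embedding, member indices)
    for i, emb in enumerate(embeddings):
        for rep, members in groups:
            if cosine(emb, rep) >= threshold:
                members.append(i)
                break
        else:
            groups.append((emb, [i]))
    return [members for _, members in groups]

# Toy descriptors: faces 0 and 1 point the same way, face 2 does not.
faces = [(1.0, 0.1), (0.98, 0.12), (0.1, 1.0)]
print(group_faces(faces))  # → [[0, 1], [2]]
```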

“With Pixel Intelligence, you can host the processing software within your own data center, saving you time and bandwidth while keeping your data secure and private,” notes Dr. Lin.

The Pixel Intelligence portfolio is the result of close collaboration with HP’s Print Software Platform team. It was launched on HP’s developer site last fall and was presented to the Digital Solutions Cooperative (DSCOOP) of HP technology users in Phoenix this spring, with a repeat presentation at DSCOOP in Lyon in early June. It’s already drawing interest from photo services companies interested in increasing personalized print workflows and automating the creation of high quality print products.

“A lot of these companies don’t have the resources for artificial intelligence research, so they are very interested in licensing our technology,” Dr. Lin says. “We’re also hearing from companies in a wide range of industries - retail, health, security, and finance, for example – that want to know more about these capabilities.” 

Computer vision remains a continuing area of interest in HP Labs. Dr. Lin and her colleagues keep improving their existing algorithms and will add any new algorithms to the Pixel Intelligence portfolio.

They are also tackling new challenges, such as improving techniques for identifying 3D objects and applying advanced computer vision to ambient computing technologies that anticipate human needs and proactively address them.

“We are already placing image-gathering sensors in more devices and more places than ever before,” observes Dr. Lin. “So we see a lot of potential for this technology in future smart home, smart office, and mobile applications.”