Q&A: Design Engineers Can Change the Game with Extended Reality

If the metaverse overlords of Silicon Valley have it right, the day will surely come when industries scramble to assemble their “virtual world” strategies.

Engineers are already at the forefront of leveraging near-perfect Extended Reality (XR) platforms to visualize their conceptual designs and to test manufacturing processes with new technologies. But augmented reality (AR), virtual reality (VR), artificial intelligence (AI) and video-game graphics are constantly merging to create augmented worlds that intersect with the physical one, and that mix makes the field difficult to keep up with.

“As with any new industry, terms are coined,” said Dijam Panigrahi, COO and co-founder of GridRaster Inc., a cloud-based platform provider that partners with manufacturers in industrial environments to scale AR/VR solutions.

“Virtual reality is when you are completely immersed in the virtual world and your real world is blocked out,” explains Panigrahi. “Augmented reality is about overlaying additional data points, such as work instructions, on the real world. Pokémon GO is a fairly simple example of AR. Mixed reality is when there is an interaction between the virtual and physical worlds…Mixed reality and extended reality are nothing but umbrella terms for it all.”

As elusive as the terms seem, and as costly as the technology is to develop and deploy, companies like GridRaster are poised to solve those problems and bring more of it online in the years to come.

Panigrahi can attest to its benefits, functionality and effective use in operational applications in the aerospace and defense industries. GridRaster is developing an AR toolset prototype for the United States Air Force to improve aircraft wiring maintenance on the USAF’s fleet of CV-22 Osprey aircraft. The CV-22’s nacelle wiring accounts for approximately 60% of total maintenance effort. The AR tool allows maintainers to troubleshoot, repair and train in the operational environment.

In the Q&A that follows, Machine Design asked Panigrahi about trends in immersive technology and the role XR plays in the design and production of industrial operations. The conversation has been abbreviated and edited for clarity.

Machine Design: The relevance of extended reality in design and engineering, especially when we talk about photorealism and mixed-reality simulations, is gaining traction. There seems to be an abundance of creators and adopters, for both content creation and real work. What do you see happening in this market space now?

Dijam Panigrahi: One of the most important things about mixed reality (the new term for everything together is “metaverse”) is the possibility it brings. We already had digital twins and CAD models from a visualization point of view, and we always had the PLM systems, which were part of the production setup. Now, with the advent of the cloud, the virtualization of GPUs (graphics processing units) and advances in headsets and sensor technology, we are in a position where we have a device that allows us to interact with the real world very seamlessly.

Creating a digital twin is like making a soft copy of your physical world. You take that soft copy and apply all the software techniques to try out variations or ‘what-if’ analyses. You analyze the different data and do it iteratively to learn from the different scenarios, and then take those lessons and apply them in the real world to test things you may have encountered along the way.
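
To make that iterative loop concrete, here is a minimal Python sketch of a what-if sweep against a twin’s behavioral model. Everything in it, the parameter, the toy cost model and the sweep range, is a hypothetical stand-in, not GridRaster’s software:

```python
# Minimal what-if loop against a digital-twin behavior model.
# The twin model, parameter, and scoring below are hypothetical stand-ins.

def simulate(params: dict) -> float:
    """Stand-in for a physics/behavior simulation of the twin.
    Toy cost model: penalize deviation from a nominal operating point."""
    nominal = 50.0
    return (params["temperature"] - nominal) ** 2

def what_if_analysis(candidates: list) -> dict:
    """Run each variation in software and keep the best performer."""
    scored = [(simulate(p), p) for p in candidates]
    best_score, best_params = min(scored, key=lambda s: s[0])
    print(f"best score {best_score:.1f} with {best_params}")
    return best_params

if __name__ == "__main__":
    # Sweep a design parameter virtually instead of on physical hardware.
    variations = [{"temperature": t} for t in range(30, 71, 5)]
    what_if_analysis(variations)
```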

You can now take care of a number of issues and what-if scenarios in the design phase itself. You don’t have to wait for the operational and after-sales environment; you can simulate all of this in the design phase. For businesses, especially in aerospace, we’ve seen that if you trace problems back, seven out of ten would have been captured and addressed in the design phase. This has been huge for the entire ecosystem.

MD: I want to go a little deeper into how you do that in the design phase. But first, who does GridRaster work with, and how do those relationships work?

DP: We work with two top contractors in the aerospace and defense industries. We have worked with the Department of Defense, the United States Air Force and multiple entities within the Air Force. On the simulation side, we are working on aircraft maintenance for the USAF CV-22. On the telecom side, we work with a number of large telcos and cable operators. What we build will be important infrastructure, so we have the end users plus the enablers, such as the cloud providers and the telecom players.

MD: Let’s get into the technology. Can you talk about how a physical mock-up, say a CAD model of part of a car, is brought to life through mixed reality in a head-mounted display?

DP: The CAD models are a visual representation, but when you talk about the digital twin, you also map out all the physical behavior. Suppose there is a component: if I run it, it will behave a certain way. All those behavioral aspects are also part of the digital twin, which means that your object or your environment, based on your interaction, will behave as it would if you were doing those things in the physical world.

All digital twin content is complex and heavy. These days, if you try to put those things on a standalone headset like a HoloLens or Oculus Quest, you go through a painful cycle of optimizing things. On a headset, there is only so much computing power available to process all the data.

But you can put the data needed for digital twins, which are integral to powering all these realistic, immersive experiences, in the cloud, and run and stream it to different devices. Then, using the sensors that capture all the interaction and input from the devices, you can simulate the environment and every interaction. You can see that visually on a HoloLens or in a VR headset like the Oculus Quest, depending on which experience you’re trying to enable. That is, broadly, how it is brought to life today.
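
At its simplest, the cloud side is a pose-in, frame-out loop: the headset uploads its pose, a GPU server renders the heavy scene, and an encoded frame streams back. The Python sketch below shows only that shape; every name in it is hypothetical, and the render and encode steps are no-op stand-ins for a real GPU renderer and video codec:

```python
# Conceptual pose-in, frame-out loop for cloud-rendered XR.
# All names are hypothetical; render/encode are no-op stand-ins.

from dataclasses import dataclass

@dataclass
class Pose:
    position: tuple       # (x, y, z) of the headset
    orientation: tuple    # quaternion (x, y, z, w)

def render_scene(pose: Pose) -> bytes:
    """Stand-in for GPU rendering of the full-fidelity twin at this pose."""
    return f"frame@{pose.position}".encode()

def encode(frame: bytes) -> bytes:
    """Stand-in for video encoding (e.g., H.264) before streaming."""
    return frame

def serve_frame(headset_pose: Pose) -> bytes:
    """One iteration: receive a pose, return the encoded frame to stream."""
    return encode(render_scene(headset_pose))

if __name__ == "__main__":
    pose = Pose(position=(0.0, 1.6, 0.0), orientation=(0, 0, 0, 1))
    packet = serve_frame(pose)
    print(f"streamed {len(packet)} bytes back to the headset")
```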

MD: How do you achieve that precise overlay of the 3D model or digital twin? And how does this help industrial design?

DP: This overlay can be done in two places. You can do it on the headset: a Microsoft HoloLens, for example, can track the environment and detect surfaces, using computer vision to identify objects and surfaces. Depending on how precisely you want to align, that can make things easier.

The challenge with a standard standalone headset is that it can only do this with a certain accuracy, or follow certain types of shapes. It needs ideal conditions to perform. By bringing it into the cloud for very precise alignment, you are able to create a very fine mesh of this whole world that you see in 3D. You can then identify all the individual objects and find out which object is important to you.

For example, if I have the whole car and I try to put just the door on top of that car, I can isolate that door and build that fine mesh from the point cloud, which holds all the information, and identify any corresponding structure in the physical world. Because I know all the anchor points, I can align the corresponding digital twin or CAD model perfectly. That’s what we do on our side.
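
One standard way to compute such an alignment from known anchor points is a rigid-registration (Kabsch/SVD) step, sketched below in Python with synthetic data. This is a generic illustration, not GridRaster’s implementation; production pipelines typically refine such a seed alignment with dense point-cloud methods like ICP:

```python
# Rigid alignment of CAD anchor points to their scanned counterparts
# via the Kabsch/SVD method. Data below is synthetic and illustrative.

import numpy as np

def rigid_align(model_pts, scene_pts):
    """Best-fit rotation R and translation t with scene ≈ model @ R.T + t."""
    mu_m, mu_s = model_pts.mean(axis=0), scene_pts.mean(axis=0)
    H = (model_pts - mu_m).T @ (scene_pts - mu_s)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_s - R @ mu_m
    return R, t

if __name__ == "__main__":
    # Anchor points on the CAD model (e.g., corners of a car door).
    model = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
    # The same anchors as seen in the headset's scan: rotated and shifted.
    a = np.radians(30)
    R_true = np.array([[np.cos(a), -np.sin(a), 0],
                       [np.sin(a),  np.cos(a), 0],
                       [0,          0,         1]])
    scene = model @ R_true.T + np.array([0.5, -0.2, 1.0])
    R, t = rigid_align(model, scene)
    print("max alignment error:", np.abs(model @ R.T + t - scene).max())
```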

This is why precision is critical. For example, if you want to enable a use case such as automatic defect detection or identifying anomalies, you already have the digital twin. It captures the ideal state of how things should be in the physical world. Suppose there is a dent or defect in an airplane. If you can perfectly overlap that digital CAD model or digital twin on the physical world, you can compute the difference between the two scenes.

That’s where the software engineering to analyze all of this comes into play. It is only possible because we are now able to create that soft copy of the physical world and compute the difference between the ideal state and what we currently see.
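
A minimal sketch of that diffing step: compare each scanned point with its nearest neighbor on the ideal model and flag anything beyond a tolerance as a candidate defect. The panel, the simulated dent and the 5 mm tolerance below are all synthetic, illustrative choices:

```python
# Flag anomalies by diffing a scan against the ideal digital-twin surface.
# Synthetic data: a flat panel with a simulated 2 cm dent near its center.

import numpy as np
from scipy.spatial import cKDTree

def find_anomalies(ideal_pts, scanned_pts, tol=0.005):
    """Indices of scanned points farther than tol (meters) from the model."""
    distances, _ = cKDTree(ideal_pts).query(scanned_pts)
    return np.flatnonzero(distances > tol)

if __name__ == "__main__":
    # Ideal state from the digital twin: a 1 m x 1 m flat panel.
    xs, ys = np.meshgrid(np.linspace(0, 1, 100), np.linspace(0, 1, 100))
    ideal = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)])
    # "Scan" of the physical part, dented 2 cm within 5 cm of the center.
    scan = ideal.copy()
    dent = np.linalg.norm(scan[:, :2] - 0.5, axis=1) < 0.05
    scan[dent, 2] -= 0.02
    print(f"{len(find_anomalies(ideal, scan))} anomalous points flagged")
```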

One of the use cases we are also working on is the nacelle wiring harness installation on the USAF CV-22 aircraft. Wiring harnesses occupy a very crowded space. If you don’t have millimeter accuracy when overlaying the virtual harnesses on the physical harnesses, you can’t give the correct instruction for someone to follow. So precision for many of these use cases is extremely important, and that’s what we’re trying to solve with our platform.

MD: What are some of the shortcomings and challenges you have now, and what are you working on in the future?

DP: From a technological point of view, there are many dependencies. One of the things we run into is that you might not have the CAD model, so we have to scan that whole environment, and that takes time. Sometimes it affects the accuracy. But we know that in the future everything will be designed in the 3D world, so you will have the CAD data for everything. Today, though, these are some of the challenges.

Another challenge relates to the headset rendering, in terms of obtaining the depth information and the color information. We often depend on what the camera sees. As the precision and realism of the camera improve, our performance improves.

Those are the most important things: the dependency on the headset and the available data or content. We have to bridge that and we do that with our technology.

MD: Developing and accessing XR technology has been quite expensive, both recreationally and commercially. Do you see the price and costs falling?

DP: Absolutely, I think that’s going to happen. Invariably, when a technology gets picked up, it’s because it makes sense in terms of value for the price. There’s a reason we went after the aerospace, defense and automotive industries. In aerospace and defense, think of repairing an airplane: every hour a plane is grounded, you risk losing hundreds of thousands of dollars, so even a 30% improvement is worth a lot.

The price of the headset can be $5,000. Setting up the whole system might cost another $10,000. But the return you get on that $10K to $15K is tenfold. For those kinds of use cases these days, the price points don’t matter.
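
As a back-of-the-envelope check on that claim, using the figures from the interview plus a hypothetical hour and a half of downtime avoided:

```python
# Rough ROI arithmetic from the interview's figures. The hours-saved
# number is a hypothetical illustration, not a quoted result.

setup_cost = 5_000 + 10_000          # headset plus system setup ($)
grounded_cost_per_hour = 100_000     # low end of "hundreds of thousands"
hours_saved = 1.5                    # hypothetical downtime avoided

roi = hours_saved * grounded_cost_per_hour / setup_cost
print(f"return of {roi:.0f}x on a ${setup_cost:,} deployment")
```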

In medical and educational applications, where you’re looking at mass adoption, value for price matters more. But the good thing is that prices are falling. When we started, we had the Oculus DK1, and getting the entire rig up and running took $3,000 worth of equipment. Now you can get an Oculus Quest for $299 and you’re on your way.

That is already a tenth of the cost, and it will continue to fall. Most likely it will go the way of mobile phones, with telecom players subsidizing the headset and you paying for it over a period of time, or as part of a monthly plan. It’s just a matter of time.
