Talking Shop with: Dr. Nader Jalili — The University of Alabama College of Engineering

Currently, Dr. Nader Jalili is the mechanical engineering department head at The University of Alabama (UA) College of Engineering and the director of the Alabama Initiative on Manufacturing Development and Education (Alabama IMaDE®). He is also the current chair of the American Society of Mechanical Engineers (ASME) Mechanical Engineering Department Heads and Chairs (MEDHC) and Chair of Southeast Mechanical Engineering Department Heads (SMEDH). Jalili is an innovative leader and researcher, known for bringing the resources of engineering education and research to undergraduate and graduate students, industry partners and community outreach programs. On March 1, 2023, Dr. Jalili will begin a new chapter in his career as the Dean of the Lyle School of Engineering at Southern Methodist University.

Dr. Jalili believes that to compete on the global manufacturing stage, the United States must place an emphasis on advanced manufacturing and create a stratified training curriculum that will allow workers to start at the entry level and eventually achieve mastery-level certifications. Industry 4.0 is already underway, and the best way to bring jobs back to the U.S. is to introduce highly complex factories that rely on technically skilled workers with experience in data, software, robotics and 3D printing. There are three main strategic objectives for advancing U.S. manufacturing: i) Develop and Transition New Manufacturing Technologies; ii) Educate, Train and Connect the Manufacturing Workforce; and iii) Expand the Capabilities of the Domestic Manufacturing Supply Chain.

Industrial Machinery Digest connected with Dr. Jalili to hear his thoughts on advanced manufacturing, robotic automation, and the future of manufacturing processes.

Photography courtesy of Matthew Wood, The University of Alabama.

TB (Trey Bell): Good afternoon Dr. Jalili. It is nice to connect with you, and have the chance to learn about your exciting research. Tell us a little bit about the scope of your operations and advanced manufacturing research.

DJ (Dr. Jalili): Thank you, Trey, for the opportunity to speak with you today. Manufacturing is changing, and everyone has his or her own definition of advanced manufacturing. If you had searched for ‘advanced manufacturing’ back in the 2000s, it would have referred to bio-manufacturing, that is, adopting biological processes and biological systems for use in manufacturing. Looking back, I had a colleague who was interested in bacteria that would eat metals, instead of any of today’s additive or subtractive manufacturing techniques! If you could properly time and control the nutrients for these special bacteria, you could influence their eating pattern, thus subtracting material from the part in a controlled manner.

Years have gone by since then, and in my personal opinion, as well as the evidence from the National Science Foundation (NSF) and other funding agencies and organizations, the future of manufacturing is based on collaboration. The future of manufacturing is not about replacing workers and personnel, as many assume when they picture robots taking over jobs. If we look at this challenge as an opportunity, we can see that more professionally trained personnel and workers are needed. That is, we need to bring humans into the loop, into the robots’ operations, or at least into an awareness of how the robots operate. In this distributed landscape, we are not trying to ‘eliminate’ workers; instead, we are aiming to ‘elevate’ them, that is, educate them with the tools and technology they will need to operate automated machinery and robots. Such a paradigm is aligned with what NSF calls the ‘Future of Work and Workers.’ NSF announced to the public its 10 big ideas that would change the future, and one of them is the Future of Work at the Human-Technology Frontier. The future worker in this context would be a person working and collaborating alongside an AI-enabled robot.

If you bring Industry 4.0 into perspective, you actually bring cyber manufacturing into the workplace and try to connect some of the manufacturing processes with non-physical machines. So imagine that you want to work with a manufacturing process that is not physically present; you would go to its virtual version and start to model it and predict its output.

Future of Work at the Human-Technology Frontier at www.nsf.gov

TB: A visualization?

DJ: A visualization, exactly, and these days we try to put all these processes into a simulation environment that uses hardware in the loop (HIL). For manufacturing, we do the same thing, that is, before we produce the part we put the digital version of it into the simulation, such as a digital version of a milling machine, or grinding machine, etc.

TB: Is that like a digital twin?

DJ: Yes. A complete digital twin would mirror the entire manufacturing line, but you can also take just a piece of it. For example, in our laboratory at IMaDE, we have a conveyor belt that is equipped with several robots, diagnostic tools and vision systems. In this setup, we could not afford to add a milling machine to that conveyor belt. Instead, we added it into the manufacturing loop virtually! So, while it is not a complete digital twin, one piece of the hardware in the loop is virtual while the other parts are all physical.
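A minimal sketch of what a virtual station in the loop can look like: a simulated milling model stands in for the machine the physical line lacks and predicts cycle times before any real machining happens. All class names, rates and volumes here are illustrative assumptions, not the actual IMaDE setup.

```python
class VirtualMill:
    """Stand-in model of a milling station used in place of real hardware."""
    def __init__(self, removal_rate_mm3_per_s=50.0):
        self.removal_rate = removal_rate_mm3_per_s

    def process(self, part):
        # Predict cycle time from the material volume that must be removed.
        cycle_time_s = part["excess_volume_mm3"] / self.removal_rate
        finished = dict(part, excess_volume_mm3=0.0)
        return finished, cycle_time_s

def run_line(parts, station):
    """Run each part through the (virtual) station, as the conveyor would."""
    total_time = 0.0
    finished_parts = []
    for part in parts:
        done, t = station.process(part)
        finished_parts.append(done)
        total_time += t
    return finished_parts, total_time

parts = [{"id": 1, "excess_volume_mm3": 500.0},
         {"id": 2, "excess_volume_mm3": 250.0}]
finished, total = run_line(parts, VirtualMill())
print(total)  # 15.0 seconds of predicted milling time
```

The physical conveyor, robots and vision systems keep running as-is; only this one station is answered by software instead of steel.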

TB: A virtual slice.

DJ: Exactly. And that is what I think manufacturing is going through with Industry 4.0, and now with Industry 5.0 on the horizon. The research that I am pursuing now concerns the human-robot interaction side of this operation.

From a physiological and psychological point of view, we have done a lot of research with communication experts. At the UA College of Engineering, for example, one of our NSF-funded projects is looking at how a robot can communicate with people. In this specific project, we are using robots to help law enforcement officers (LEOs) and potentially first responders when communicating with suspects, civilians, or even people who may need help. One of the key features of our research is to enable effective two-way communication, NOT necessarily robots with weapon-defusing capabilities. Our underlying goal, instead, was focused on opening a channel of communication.

We brought in a communication expert and asked, “How would you talk, and how would you mediate the situation?” Our robots are designed based on that premise. In interviews with civilians and police officers, we discovered they didn’t want this robot to look like a human. ‘You don’t want to fool us, we know it is a robot. Just show it like a robot,’ with a face that looks like a tablet and a hand that looks like a robotic arm. That is the environment we have been trying to create. The robot has wheels, arms, hands and a flat face, but the first intention was to equip it with two-way audio and visual communication in order to mediate the situation.

When we take that same capability to the manufacturing side, you want a worker to be able to work alongside a robot. One of my research areas has been to reduce the stress level in people working with robots, and to have robots “read” people’s minds and detect their stress levels. I can give you two scenarios as examples. In one scenario, imagine the person is working alongside a robot that does not sense the presence of the person. In the second scenario, using the same robot and the same person, we have the person wear a wristband that detects heart rate, blood pressure and other physiological signals, such as skin conductance to measure perspiration. The robot receives all of these physiological signals, and also processes eye-tracking signals to understand the distance between the robot and the person. The robot uses these inputs to maintain the correct distance from the person and adjust its operating speed based on the person’s level of stress. The robot can determine the right speed with which to move, rather than moving too fast or too slow. After the person has spent a few minutes working with the robot, the robot can begin to speed up to match the person’s performance.
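The idea of scaling a robot's speed to an operator's stress can be sketched roughly as follows. This is not the actual research system; the signal names, thresholds and formula are illustrative assumptions only.

```python
def stress_index(heart_rate_bpm, skin_conductance_uS, baseline_hr=70.0):
    """Crude 0..1 stress estimate combining two wearable signals.

    Both weightings and the 20 microsiemens normalization are assumptions
    for illustration, not validated physiology.
    """
    hr_term = max(0.0, (heart_rate_bpm - baseline_hr) / baseline_hr)
    sc_term = min(1.0, skin_conductance_uS / 20.0)
    return min(1.0, 0.5 * hr_term + 0.5 * sc_term)

def robot_speed(nominal_speed, stress, min_fraction=0.3):
    """Slow the robot proportionally to stress, never below a safe floor."""
    fraction = max(min_fraction, 1.0 - stress)
    return nominal_speed * fraction

# A calm operator gets near-nominal speed; a stressed one slows the robot.
calm = robot_speed(100.0, stress_index(72, 4.0))
tense = robot_speed(100.0, stress_index(110, 16.0))
print(calm, tense)
```

As the wristband readings settle back toward baseline, the same loop naturally lets the robot speed back up, which is the "matching the person's performance" behavior described above.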

This is what I think the future should look like. We have to maintain operator safety; the robot should be able to sense when the operator is concerned and when it should back off or stop. That type of research shows we are not just replacing people with machines and robots. Instead, we are inviting people to change their perception, education and training, and to shape the future by creating a mixture that leverages human intelligence and robotic learning. This is a new landscape that is being explored.

TB: That is cool stuff. Back in 2014, we looked at wearable technology, Google Glass, augmented reality, visualization and some other technologies as having a big impact on aircraft MRO processes. Sounds like this is an awesome representation of a wearable technology application for robotics. It’s a natural evolution.

We are going to ask you for a different perspective now. What technologies do you see today that are commercially available but being severely underutilized in the manufacturing space? What is something that our readers could be doing right now to get benefits from automation?

DJ: The most important gap is the lack of controlled, closed-loop decisions. The capability exists in a lot of manufacturing devices and tools, but a lot of people are still running those tools and processes in a “teach” mode, or open-loop operation, or what we call feed-forward operation. Feed-forward means that you plan your motion. You say, I want to go to point A, then point B, then C, and then D, and you assume nothing is going to change as you move from one point to the next. For example, you are going to walk between two buildings, and you have done it many times, so you know that it only takes 5 minutes to walk between those buildings.

TB: Oh you mean like MIB, Houser, and Hardaway [Editor’s Note: three buildings at the UA College of Engineering]?

DJ: Yes [laughing]. And on that walk you run into a professor and start chatting; that’s called a disturbance, in this case a good one, right? A lot of manufacturing tools are equipped with features that collect data continuously, in a timely manner and on any time-scale you want. However, they are not using ‘controlled’ operations, or what we call feedback control. So for our walking example, if I know that I am going to be stopped by Professor X for 2 minutes, I can adjust my walking pace in the next section instead of following the plan I had before starting my walk. If I had not accounted for this disturbance, I would not be able to reach my target point D at the appropriate time. Understanding and reacting to a disturbance is what we call feedback control with tunable features. So that’s one place where I think industry should be focused on harvesting the benefits of robust adaptive feedback control, in the many manufacturing processes that do not fully utilize this capability. As a recent example, we have been doing research through collaborations led by two experts in additive manufacturing in our department. This was a friction stir welding additive manufacturing tool, where a rotating spindle required control of friction and force to produce growth of material. Previous machines did not have a tunable, adaptive feedback control using on-the-fly sensor information from temperature, contact force and other moving parts, to name just a few. Our modified system has enabled a much more optimized operation.
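The walking analogy maps directly onto the difference between open-loop and closed-loop operation. A toy sketch, with made-up numbers (a 300-second walk planned in four equal legs, and a 120-second chat as the disturbance in leg two):

```python
def feed_forward(plan, disturbances):
    """Open loop: execute the plan as-is; any delay simply accumulates."""
    return sum(plan) + sum(disturbances.values())

def feedback(plan, disturbances):
    """Closed loop: after each leg, measure lateness and shorten later legs.

    The 0.5 floor models a physical limit: you can at most double your pace.
    """
    plan = list(plan)               # work on a copy; don't mutate the input
    target = sum(plan)
    elapsed = 0.0
    for i in range(len(plan)):
        elapsed += plan[i] + disturbances.get(i, 0.0)
        lateness = elapsed + sum(plan[i + 1:]) - target
        remaining = len(plan) - i - 1
        if lateness > 0 and remaining:
            cut = lateness / remaining
            for j in range(i + 1, len(plan)):
                plan[j] = max(plan[j] - cut, plan[j] * 0.5)
    return elapsed

plan = [75.0, 75.0, 75.0, 75.0]     # seconds per leg, A -> B -> C -> D
disturbance = {1: 120.0}            # the professor chat, injected in leg two

late = feed_forward(plan, disturbance)     # 420.0: arrives 2 minutes late
recovered = feedback(plan, disturbance)    # 326.25: most of the delay absorbed
print(late, recovered)
```

The open-loop walker arrives the full two minutes late; the closed-loop walker cannot erase the delay entirely (the pace floor binds), but recovers most of it, which is exactly what adding feedback to an existing process buys.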

When I got involved in this process, I asked: why are we not using all the sensors available to us? This was as recent as last year, and it indicated that a lot of manufacturing process capabilities are available but we don’t take full advantage of them. So today, we may not need to invent a new manufacturing technique; instead, we can go to an existing technique and add a software-level operation to optimally control the process. In a lot of places you can do this at the software level, without bringing in any hardware. All you have to do is adjust how the process is operated and commanded. In this specific project, we realized that there were no force or heat sensors used in a feedback-control manner, so we started adding a force sensor and temperature measurement sensors. While the original equipment cost was somewhere between $500k and $1 million, the added hardware and modified software were at the noise level, only a few thousand dollars.

TB: So $2k or $3k for those items to get the improvement?

DJ: Yes, and the software. In my expert opinion, this is the number one thing that has been overlooked in manufacturing equipment. I think the second important factor is the readily available data you can collect. We have the technology today, we have the AI (artificial intelligence), to assess the health of the device using the data collected. We may not need a new manufacturing technique; instead, a robust adaptive control addition to predict the life of the equipment and enable predictive maintenance may suffice.

In Industry 4.0 and Industry 5.0, companies are collecting data and instrumenting the robot even further by adding various sensors. They might add, for example, an inexpensive camera as well, and integrate that with all the other measurements. With all that information, you will be able to get the state of health of the robot and try to predict when it will fail. If the predictive analytics say the arm of the robot is going to fail in two months, for example, you place the order for that part and schedule the preventive maintenance to change out the arm without disruption. Those are the top two factors I think we need to take advantage of, as I am witnessing in my own research and funded projects.
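The predictive-maintenance idea can be sketched as a trend extrapolation: fit a line to a wear indicator (for example, motor current on the robot arm) and estimate when it will cross a failure threshold. The readings and threshold below are made up for illustration.

```python
def remaining_useful_life(readings, threshold):
    """Least-squares line fit y = slope*t + intercept over (day, value)
    readings; return estimated days until the trend crosses `threshold`."""
    n = len(readings)
    mean_t = sum(t for t, _ in readings) / n
    mean_y = sum(y for _, y in readings) / n
    slope = (sum((t - mean_t) * (y - mean_y) for t, y in readings)
             / sum((t - mean_t) ** 2 for t, _ in readings))
    intercept = mean_y - slope * mean_t
    if slope <= 0:
        return None                      # no degradation trend detected
    crossing_day = (threshold - intercept) / slope
    return max(0.0, crossing_day - readings[-1][0])

# Wear readings trending upward; failure assumed at 10.0 units.
readings = [(0, 4.0), (10, 4.5), (20, 5.0), (30, 5.5), (40, 6.0)]
days_left = remaining_useful_life(readings, threshold=10.0)
print(days_left)  # roughly 80 days: order the spare and schedule maintenance
```

Real systems would use richer models and more signals, but the decision logic is the same: when the projected crossing lands inside the parts-ordering lead time, trigger the maintenance work order.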

TB: Great stuff Dr. Jalili. Now I want you to look out over the horizon, shift forward five years, give us a forward looking perspective of something in its infancy, that we are just reading about or hearing about, something that industrial companies should have on their radar for consideration.

DJ: As we are only beginning to scratch the surface, and as I talked about the human side of manufacturing, we will be moving towards what is referred to as distributed manufacturing. The designer, for example, could be in Country A, the robot in Country B, and the products to sell and deliver in Country C. All of these come together in a virtual, cloud-based environment that brings everybody into the same space where they can interact. So everyone can see the product using their goggles, in virtual and augmented reality, and can talk with each other about what could be done collaboratively. What we do today with Zoom is similar. I could not have imagined writing a major multi-investigator proposal with five different universities from just my computer at home during the COVID-19 pandemic. Everyone was sharing their presentations, working at the same time and commenting at the same time. That was made possible, and now everybody enjoys this environment where we can do all of this collaboratively: sharing, presenting, producing. Now imagine taking this to the manufacturing world and creating a virtual place where everyone can join and start testing their product, and testing that process, virtually.

Again, people are physical, but they join the virtual environment where that manufacturing process is built and viewed step by step, as if you were in that domain. We have a real example where we used a custom trailer that had everything in it, including power generators, air conditioning, etc., to power a small ‘manufacturing plant’ inside the trailer. You can see everything, go through the interactive steps, and come out with a product in your hand that has been produced.

We have to get that trailer to high schools, and that is what we have been doing, to say, ‘Hey, students, manufacturing is not about an oily, dirty factory where you walk around large, ugly machines and gearboxes with grease on the floor. It can be as nice as the computer-based manufacturing in this trailer. Go inside and be amazed.’ We teach undergraduates and high school students that this is the future of manufacturing, and we attract them to our program. But imagine that you create a virtual version of this, so that you don’t need a trailer that moves around. Then you would create a virtual environment where everyone can join and run through manufacturing education and research, experiencing an avatar-like environment generated by a headset.

TB: Thank you for giving us a great perspective today Dr. Jalili. Good luck in your research!

DJ: You’re welcome. Thank you.

Trey Bell
William "Trey" Bell is a graduate of the University of Alabama with a Bachelor of Aerospace Engineering. He has extensive experience in industry consulting, having worked with leading companies in a range of businesses including Aerospace and Defense, Medical Device, and High Tech. From product development to manufacturing, Trey has experience with methodologies, CAD/CAM, and software such as visualization and analytics software, ERP, system prognostics, and CRM. He has also worked as a CxO for multiple entities in different industries and has a personal consulting company. Trey runs a video streaming company called tr3dio and enjoys creating graphics and building new technology stacks.