Hod Lipson: Professor in Engineering and Data Science at Columbia University

Prof. Hod Lipson

“Hod Lipson is a professor of Engineering and Data Science at Columbia University in New York, and co-author of the award-winning books “Fabricated: The New World of 3D Printing” and “Driverless: Intelligent Cars and the Road Ahead”. Lipson has co-authored over 300 publications that have received over 30,000 citations to date. He has co-founded four companies, and is a frequent keynote speaker at both industry and academic events. Hod directs the Creative Machines Lab, which pioneers new ways to make machines that create, and machines that are creative.

His work focuses on robotics and artificial intelligence. He and his students love designing and building robots that do what you’d least expect robots to do: self-replicate, self-reflect, ask questions, and even be creative. Hod’s research asks questions such as: Can robots ultimately design and make other robots? Can machines be curious and creative? Will robots ever be truly self-aware? Answers to these questions can help illuminate life’s big mysteries.”

Below is a transcript of the Q&A session.

Although I am not from an AI background, from what I could understand of what you are doing, is it not equivalent to fitting curves and equations? For example, you take the datasets you get and derive an equation. Is it not, in simple terms, just curve fitting?

I think it is; it is simple curve fitting. That’s what all science is.

We can do the same in Excel if we are given a dataset, as long as it is not too complicated, because Excel is not designed for that kind of computation: fitting a curve to get a second-order, third-order, or fourth-order polynomial equation.

No, no, it’s very different from Excel, because in Excel you need to know what formula to fit.

No, if we have a dataset, we plot a graph and then we can fit an equation to the curve.

But you need to know what equation to fit. You have to give it an equation; you have to say it’s a linear equation, or maybe it’s a polynomial. So that’s a big difference. Here we didn’t know what the equation was; the system found the equation. And this is the main difference.
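
To make the distinction concrete, here is a minimal Python sketch. It is a toy stand-in, not the system discussed in the talk: the data-generating law and the small candidate list are invented for illustration. Fixed-form fitting, as in Excel, only tunes the coefficients of a form you supply; form discovery also searches over which structure to use.

```python
import numpy as np
from scipy.optimize import curve_fit

# Data generated by an unknown law (here, secretly y = 3*sin(x) + 0.5*x).
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)
y = 3 * np.sin(x) + 0.5 * x + rng.normal(0, 0.05, x.size)

# The "Excel" approach: you pick the form up front (say, a cubic) and
# only its coefficients get fitted.
cubic_coeffs = np.polyfit(x, y, deg=3)

# The "discovery" approach: the structure itself is searched for. A real
# system evolves expressions; this toy just scores a fixed candidate list.
candidates = {
    "a*x + b":        lambda x, a, b: a * x + b,
    "a*x**2 + b*x":   lambda x, a, b: a * x**2 + b * x,
    "a*sin(x) + b*x": lambda x, a, b: a * np.sin(x) + b * x,
}

def mse(f):
    popt, _ = curve_fit(f, x, y)          # fit coefficients of this form
    return np.mean((f(x, *popt) - y) ** 2)

best_form = min(candidates, key=lambda name: mse(candidates[name]))
print(best_form)  # -> a*sin(x) + b*x: the data chose the structure
```

A real discovery system searches an open-ended space of expressions rather than a hand-written list, but the principle is the same: the data selects the structure, not just the parameters.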


You have this data and you generate a function; am I right in understanding that you don’t really know what those variables are? Is that the problem?

No, in the first example you know what the variables are, because you measure them, but you don’t know what the equation means. When we look at the double pendulum, we know that the Hamiltonian represents energy; we have an intuition about why the Hamiltonian does what it does. When you get an equation for a system you don’t understand, you don’t know why that equation is invariant.
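
For a concrete picture of what “invariant” means here, the following minimal Python sketch (using a single pendulum rather than the double pendulum, purely for brevity) simulates a trajectory and checks that the energy-like quantity stays constant even though the state variables change the whole time:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Simple pendulum of length L: theta'' = -(g/L) * sin(theta)
g, L = 9.81, 1.0

def rhs(t, state):
    theta, omega = state
    return [omega, -(g / L) * np.sin(theta)]

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], dense_output=True, rtol=1e-9)
theta, omega = sol.sol(np.linspace(0.0, 10.0, 500))

# The invariant (energy per unit mass): theta and omega oscillate, H does not.
H = 0.5 * (L * omega) ** 2 + g * L * (1.0 - np.cos(theta))
print(H.std() / H.mean())  # ~0: constant along the whole trajectory
```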

If we take F = ma, we know what force means, and we know that…

Right, but F = ma is a hard example, because you already know so much about this equation that it’s hard to see the problem. But suppose I give you an equation for fire flames, and it tells you how this pixel relates to that pixel over time from a camera: here is the equation. So, you look at this equation, but what does it mean? Okay, it’s colours. But does this represent thermodynamics? What is behind this equation? What is the governing principle?

So, the generator is opaque?

Exactly. You find the invariance. There are some very interesting philosophical papers arguing that scientific inquiry is all about finding invariance. Everything is changing all the time, but there are things that are not changing: F always equals ma, E always equals mc². Finding the things that are not changing is science, finding the laws. But we don’t know what they mean. If you look at E = mc², our system would tell you that E is proportional to m, and that c² is a constant; you wouldn’t know that the constant is the speed of light squared, you would just see a constant there. Understanding that it is actually the speed of light squared, and then why that is the case, is where the scientist comes in. The system finds the empirical relationship.
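
As an illustration of that last point, here is a tiny Python sketch (with invented sample data) of what a fitting procedure actually reports. Given masses and rest energies, the regression recovers E = k·m for some constant k; nothing in the output says that k is the speed of light squared:

```python
import numpy as np

c = 2.998e8                                   # known to us, not to the fitter
m = np.array([9.11e-31, 1.67e-27, 6.64e-27])  # masses in kg (e-, p, alpha)
E = m * c**2                                  # "measured" rest energies in J

# The fitter only reports the empirical law E = k * m.
k = np.linalg.lstsq(m[:, None], E, rcond=None)[0][0]
print(f"E = {k:.3e} * m")             # k ~ 8.988e+16 ... just a constant
print(f"sqrt(k) = {np.sqrt(k):.3e}")  # ~ 2.998e8: recognising this as the
                                      # speed of light is the scientist's job
```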


My question is about robotics. You mentioned that soft robots are difficult to make, so you went into 3D printing. But of course, there are some people who go more into tissues, more to the biological side. Why did you choose 3D printing, and do you see some advantages or disadvantages? Because, as I understand it, it is more about the shapes and capabilities.

A specific goal in my lab is to actually build the robots; we don’t want just to simulate them. Soft robots are a much more interesting and rich space. Like you said, biology is soft; titanium robots can only do certain things, but soft robots are so much more complex, so much richer, so much more versatile, and the design space is much larger. But manufacturing soft robots is a lot harder. And so, in a way, the field of soft robotics is stuck, because we can imagine, simulate, and design much more complicated things than we can build. To bridge that gap, there is a lot of research into soft actuators and into manufacturing techniques for soft robots. I think that field needs to develop for a few more years before we can start manufacturing all the robots that we can design; there is a big gap that needs to be closed. Now, biology is already soft, but it is very hard to manufacture biological robots. There is research, actually, by Josh Bongard, who is working on building what he calls xenobots, biological robots that are computationally evolved. But that is very nascent work.

Exactly, I was talking about this type of robot. So, if you can build these soft robots, where would you see their application area?

That is a very different question: applications of soft robots. I think soft robots are much more important when it comes to human interaction, if you have robots that interact with children, with people. In fact, one of the areas that I find most exciting for soft robotics is robotic faces. Look at our ability to make facial expressions, to smile, to frown: our face is soft and has, I don’t know, a hundred muscles in it. And that ability is very, very important for human communication. So, we are building soft robots that are learning to imitate human facial expressions, and you cannot do that with a rigid robot. It has to be soft in order to communicate with a human; to smile, you have to be soft. I think human interaction is where this compliance is needed, but, of course, there are many other reasons. People say that soft robots are more robust, they don’t break as easily, they can crawl into tight spaces. There are many other reasons, but I think the main one is human-robot interaction.

Did you try to apply this method to address unmodelled effects in robot simulators?

Yes, that is a great question. In fact, I believe that in the future this is how we are going to simulate robots. We are not going to use off-the-shelf simulators like we do today. Instead, we are going to throw software onto a robot and say, “Robot, simulate yourself.” And the robot will learn to simulate everything it needs to know. We have done some simple experiments, and you can see that when a robot simulates itself, it learns to simulate the things it cares about. We engineers obsess about things like mass distribution and geometry. But when a robot simulates itself, it spends a lot of time simulating things like sensor lag times, for example. If there is a delay in a sensor, that affects the robot’s control more than a discrepancy in the robot’s mass. When robots model themselves, they begin to model things that we engineers do not necessarily pay attention to. So, that is a great question: robots modelling themselves pay attention to aspects that are generally unmodelled, as you say, by engineers.
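
Here is a minimal Python sketch of that idea, under a hypothetical one-dimensional setup (not the lab’s actual experiments): a robot logs its own motor commands and sensor readings, then searches over candidate self-models, including a sensor-lag parameter that an engineer’s hand-built model might omit:

```python
import numpy as np

# Hypothetical self-modelling toy: the robot's "dynamics" are y = gain * u,
# but the sensor reports with a lag of `true_delay` timesteps.
rng = np.random.default_rng(1)
true_gain, true_delay = 0.8, 3
u = rng.normal(size=300)                  # logged motor commands
y = np.roll(true_gain * u, true_delay)    # lagged sensor readings...
y = y + rng.normal(0.0, 0.01, u.size)     # ...plus measurement noise

def prediction_error(gain, delay):
    """How badly a candidate self-model (gain, delay) predicts the log."""
    pred = np.roll(gain * u, delay)
    return np.mean((pred[delay:] - y[delay:]) ** 2)  # skip wrapped samples

# The robot searches over self-models, lag parameter included.
gains, delays = np.linspace(0.5, 1.2, 15), range(6)
best = min(((g, d) for g in gains for d in delays),
           key=lambda gd: prediction_error(*gd))
print(best)  # -> (~0.8, 3): the learned self-model captures the sensor lag
```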

To what extent do you think we will first see creative systems before we see communicative systems, where creative means coming up with something novel and original, and communicative means something that can hold a naturalistic conversation?

That is a great question. AI can already generate novel ideas today, and not just in art and music: AI can create antennas and proteins better than humans, certainly better than most humans. We already see AI exhibiting engineering creativity that exceeds the ability of most engineers. If you are an engineer, maybe you are sad to hear this, but I am very excited, even though I am an engineer; it means it is an incredible tool. But there are other areas where AI is not making much progress, and one of them is conversation. AI can recognize cats and dogs, design antennas, fold proteins, and play chess and Go, but AI cannot hold a conversation. And that is despite everything you are seeing with GPT-3 and all that, which can write essays and poems that it itself does not understand. It can do amazing things, but not hold a conversation. And frankly, nobody knows how to do that; we are not even close. Alexa can answer questions like “What’s the weather tomorrow?” and “Where is the best pizza near me?”, but it cannot hold a conversation. In some sense, if you can hold a conversation with another human, you are still way ahead of AI.

A deep dive into the topic:
Chen, B., Huang, K., Raghupathi, S., Chandratreya, I., Du, Q., & Lipson, H. (2021). Discovering State Variables Hidden in Experimental Data. arXiv preprint arXiv:2112.10755.