Boston Dynamics

53 comments, last by Calin 2 years, 4 months ago

Calin said:
The problem with projects like Boston Dynamics is that what they have is a blind robot. The robot doesn't have cameras to see the environment, so whatever it does, it does blindfolded.

IDK what they use for sensing, but I doubt those robots run just blindly. Sensing and modeling the environment is a key problem for mobile robots.

My ragdoll so far is indeed blind; its only sense is reaction to contacts, i.e. a sense of touch. And it knows about velocities and its center of mass, ofc.
I'm unsure how to extend this. Rendering a small framebuffer with depth is expensive to generate and expensive to analyze. A volume of signed distances would be even more expensive to generate, but easy to analyze.
Then we have the current solution, which is to tag the environment with walkable surfaces and use pathfinding, plus some raytracing for visibility checks, plus range queries to list potential interactions or dynamic objects nearby. That's fast and good enough for current game mechanics, but we surely don't want to stick with this forever.
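To make the "easy to analyze" part concrete, here is a minimal sketch of querying such a signed distance volume: one sample tells a foot or hand how far away the nearest obstacle is, and central differences give an approximate surface normal almost for free. The struct and its layout are assumptions for illustration, not code from my project.

// Minimal sketch of sensing through a signed distance volume.
// The grid layout is an assumption for illustration.
#include <algorithm>
#include <cmath>
#include <vector>

struct SdfVolume
{
    int nx, ny, nz;        // grid resolution
    float cellSize;        // world units per cell
    std::vector<float> d;  // signed distances, indexed x + nx * (y + ny * z)

    float at(int x, int y, int z) const
    {
        x = std::clamp(x, 0, nx - 1);
        y = std::clamp(y, 0, ny - 1);
        z = std::clamp(z, 0, nz - 1);
        return d[x + nx * (y + ny * z)];
    }

    // Nearest-cell sample; trilinear filtering would be smoother,
    // but this shows the idea.
    float sample(float wx, float wy, float wz) const
    {
        return at(int(wx / cellSize), int(wy / cellSize), int(wz / cellSize));
    }

    // Central differences over the distance field give an
    // approximate surface normal.
    void gradient(float wx, float wy, float wz, float n[3]) const
    {
        const float h = cellSize;
        n[0] = sample(wx + h, wy, wz) - sample(wx - h, wy, wz);
        n[1] = sample(wx, wy + h, wz) - sample(wx, wy - h, wz);
        n[2] = sample(wx, wy, wz + h) - sample(wx, wy, wz - h);
        const float len = std::sqrt(n[0] * n[0] + n[1] * n[1] + n[2] * n[2]);
        if (len > 0.0f) { n[0] /= len; n[1] /= len; n[2] /= len; }
    }
};

With that, "steer the limb away from anything closer than some margin" becomes a few distance samples per step, with no image analysis involved.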

What would you expect from simulating vision? I thought about precise cover mechanics or hide and seek in a shooter.
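For comparison, the raytraced visibility checks mentioned above already get you most of the way to cover mechanics. A hedged sketch, assuming the engine supplies some raycast to plug in; the names here are made up:

// Sketch of a hide-and-seek visibility test. BlockedFn stands in for
// whatever raycast the engine provides; it is not a real API.
#include <cstddef>

struct Vec3 { float x, y, z; };

// Returns true if the segment from a to b hits level geometry.
using BlockedFn = bool (*)(const Vec3& a, const Vec3& b);

// An NPC "sees" the target if the ray to at least one sample point on
// the target's body (head, torso, limbs) is unobstructed. Testing
// several points is what makes peeking over a crate or leaning out of
// cover behave believably.
bool CanSee(BlockedFn blocked, const Vec3& eye,
            const Vec3* targetSamples, std::size_t count)
{
    for (std::size_t i = 0; i < count; ++i)
        if (!blocked(eye, targetSamples[i]))
            return true;
    return false;
}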


JoeJ said:
but I doubt those robots run just blindly

There might be some sensing on that expensive dog robot to prevent it from running into a wall, but most likely only a residual amount. Walking or running in balance is one thing; having a biped take corners (or do limb-to-eye/camera coordination) is another level of complexity, one which involves navigation. That's why I'm ready to bet they haven't even scratched the surface with sensors, because if it were otherwise we would have seen much more from them. From the videos I've seen, no sensors are required at all in theory (except for sensing push and pull forces).

My project's Facebook page is “DreamLand Page”

The whole point of Boston Dynamics is that their robots can sense the world and change their gait/foot placement because of it.

This is super obvious when you study the videos of Atlas doing parkour, for example.

They even describe the cameras (both RGB and depth) online:

enum Bool { True, False, FileNotFound };

hplus0603 said:
The whole point of Boston Dynamics is that their robots can sense the world

This is probably one of their more recent achievements. I don't recall seeing robots jumping previously.

My project's Facebook page is “DreamLand Page”

Robots have been jumping for years. But not in such elaborate patterns.

In the first video hplus0603 posted, the robot takes turns to the left/right and moves across several sections of the challenge course, each with a different obstacle type. I'm sure he's not remote controlled, so if he is indeed autonomous, how does he keep track of his position in the obstacle course? He's obviously set to follow the obstacles in a particular order, which means he has a map that he is following. That's my only explanation for how he knows where to go next. So does anyone know if he gets his location by triangulating his position from signals to nearby radio emitters/receivers, or is he 100% autonomous, with no exchange of information with the outside world (relying only on his cameras/sensors)?

My project's Facebook page is “DreamLand Page”

@jon1 Neural networks are a tool to achieve a universal problem solver, and they are on their way. There are all sorts of research projects going on that provide brains with new kinds of input and observe how they adapt to this fundamentally new input to their universal problem solver. One MIT student is feeding stock market data into the backs of volunteers to see if they can learn about the stock market, for instance.

The state of neural networks and AI is moving toward problem-solving techniques that humans aren't even going to easily comprehend, if at all; it is going to be fundamentally new problem-solving technology, even beyond that of mere neural-network brains. Brains and neural networks are going to be obsolete, but for now they are the pinnacle of universal problem solving. To say that neural networks are not universal problem solvers is simply to define one that is poor at it; that does not describe what is possible.

I don't know why you guys think the Boston Dynamics robots are blind; they have visual sensors. Atlas has lidar.

https://www.businessinsider.com/atlas-robot-sensor-system-2014-4

@Calin

Calin said:
So does anyone know if he gets his location by triangulating his position from signals to nearby radio emitters/receivers

It is 100% autonomous in this case.

Some BD tools can use GPS for high-level localization, but that doesn't give you precise enough information to know how the obstacles are slanted under your foot, even IF you have pre-surveyed the area. And the shipped BD robots (like Spot) navigate in un-surveyed, fully novel areas.

h8CplusplusGuru said:
they are on their way

I think we will have fusion energy figured out before we have automated universal problem solving figured out.

I don't doubt that we will, eventually, figure out both of those, but I think they are further away than you might think. I have very little direct experience with fusion, but I do have direct experience with a variety of neural network architectures and models, so that part, I'm pretty sure about: We're nowhere near.

(Then, when it comes to consciousness, there are arguments about requiring actual agency in the world and some actual reward/"desire" to motivate actual problem solving, which I'm sure can be solved, but models are nowhere near the point where those challenges are even starting to surface.)

enum Bool { True, False, FileNotFound };

This topic is closed to new replies.
