Assembling an IKEA Chair Without Having a Meltdown

Jun 24, 2020 | Artificial Intelligence (AI), Industry 4.0 / IoT / IIoT, Smart Manufacturing

And just like that, humanity draws one step closer to the singularity, the moment when the machines grow so advanced that humans become obsolete: A robot has learned to autonomously assemble an Ikea chair without throwing anything or cursing the family dog.

Researchers report today in Science Robotics that they’ve used entirely off-the-shelf parts—two industrial robot arms with force sensors and a 3-D camera—to piece together one of those Stefan Ikea chairs we all had in college before it collapsed after two months of use. From planning to execution, it only took 20 minutes, compared to the human average of a lifetime of misery. It may all seem trivial, but this is in fact a big deal for robots, which struggle mightily to manipulate objects in a world built for human hands.

To start, the researchers give the pair of robot arms some basic instructions—like those cartoony illustrations, but in code. This piece goes first into this other piece, then this other, etc. Then they place the pieces in a random pattern in front of the robots, which eyeball the wood with the 3-D camera. So the researchers give the robots a list of tasks, then the robots take it from there.
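For the curious, here is a rough idea of what that coded task list might look like. The paper doesn’t publish its plan format, so the part names, fields, and step types below are purely illustrative:

```python
# A minimal sketch of an ordered assembly plan, the "cartoony illustrations
# in code" described above. All names here are hypothetical illustrations,
# not the researchers' actual data structures.
from dataclasses import dataclass

@dataclass
class AssemblyStep:
    part: str        # part to pick up, e.g. a peg or a side frame
    target: str      # part it attaches to
    action: str      # "insert_peg" or "attach_frame"

# The plan encodes only the sequence; pose estimation and motion
# planning are left to the robots at run time.
STEFAN_PLAN = [
    AssemblyStep(part="peg_1", target="side_frame_left", action="insert_peg"),
    AssemblyStep(part="peg_2", target="side_frame_left", action="insert_peg"),
    AssemblyStep(part="back_rest", target="side_frame_left", action="attach_frame"),
]

for step in STEFAN_PLAN:
    print(f"{step.action}: {step.part} -> {step.target}")
```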

“What the robot does is to first figure out where exactly is the original position of the frame,” says engineer Quang-Cuong Pham of Nanyang Technological University in Singapore, “and then calculates the motion of the two arms automatically to go and grasp it and transport it.”
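The team hasn’t published that pose-estimation code, but the textbook way to recover a rigid part’s position from a 3-D scan is rigid registration. A minimal sketch of the core step, assuming matched point correspondences between the part’s model and the camera’s view, using the classic Kabsch/SVD method:

```python
# Minimal sketch of rigid pose estimation: given matched 3-D points from the
# part's model and from the camera, recover a rotation R and translation t
# with the Kabsch (SVD) method. The actual system likely uses full point-cloud
# registration; this only shows the core idea.
import numpy as np

def estimate_pose(model_pts, camera_pts):
    """Both inputs are (N, 3) arrays of corresponding points."""
    mu_m, mu_c = model_pts.mean(axis=0), camera_pts.mean(axis=0)
    H = (model_pts - mu_m).T @ (camera_pts - mu_c)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_c - R @ mu_m
    return R, t     # camera_pts approx. equals (R @ model_pts.T).T + t
```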

As one arm grasps, say, the back of the chair, the other arm picks up one of those infernal wooden pegs and tries inserting it into a hole at the joint. That 3-D camera only has an accuracy of a few millimeters, so the robot has to feel around. The robot makes swirling motions around the hole, and when it feels the force pattern change, it knows the peg has dropped in slightly, then applies more force to fully insert the thing.
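In code, that swirl-and-feel strategy resembles a classic spiral search with force feedback. This is only a sketch of the idea, not the team’s actual controller; the motion and sensor calls (move_to, read_force) and the numeric thresholds are hypothetical stand-ins:

```python
# Conceptual sketch of "swirl until the force pattern changes": press the peg
# down while spiraling outward from the camera's estimate of the hole, and
# watch for the vertical reaction force to drop when the peg slips in.
import math

def spiral_search(center_x, center_y, read_force, move_to,
                  step=0.0005, max_radius=0.004, drop_threshold=2.0):
    """Search around the estimated hole position (accurate only to a few
    millimetres) along an Archimedean spiral. Units: metres, newtons."""
    theta, radius = 0.0, 0.0
    while radius <= max_radius:
        x = center_x + radius * math.cos(theta)
        y = center_y + radius * math.sin(theta)
        move_to(x, y)
        if read_force() < drop_threshold:   # force pattern changed: peg dropped in
            return x, y                     # now push down to fully insert
        theta += 0.3                        # advance along the spiral
        radius = step * theta / (2 * math.pi)
    return None                             # hole not found within search radius
```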

This, though, is where the robot tends to have problems. If it hasn’t scanned the hole accurately enough, it might start swirling too far away—all the way over the edge of the piece. “Then the changes in force pattern are the same, so it would think that it has found the hole and it would go and insert in the void,” says Pham.

Matters grow more complicated when the robot arms have to grip either end of a larger piece of the chair. Not only does each robot arm have to calculate its own grasping and lifting motion, but it has to do so in consideration of the other arm. Think of grasping the ends of a baseball bat with both hands and swirling it around: each arm is restricted by the movements of the other.
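That restriction is what roboticists call a closed kinematic chain: once both grippers hold the same rigid part, one arm’s pose fully determines the other’s. A tiny sketch of the constraint, with poses written as 4x4 homogeneous transforms (the transform names are illustrative):

```python
# Sketch of the closed-chain constraint when both arms hold one rigid part.
import numpy as np

def required_right_pose(T_world_left, T_left_part, T_part_right):
    """Chain the transforms: world -> left gripper -> part -> right gripper.
    Any motion commanded to one arm implies this exact pose for the other;
    deviating from it strains (and can break) the part."""
    return T_world_left @ T_left_part @ T_part_right
```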

The stakes are even higher for the robot because it’s making calculations as it’s eyeballing the pieces, and has to commit to the plan it works out. “If there is a small error, for example in the modeling of the object, then the arms would fight each other, pulling this direction and the other pulling in another direction,” says Pham. “If that happens the robot will break the object.”

The solution is the force sensors. “When we sense that the force is too much, then it would change the motion of the robot to accommodate the errors,” Pham adds.
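One common way to implement that kind of accommodation is admittance control: let the commanded position yield along the direction of excessive force. A minimal sketch, with a gain and force limit that are assumptions rather than values from the paper:

```python
# Minimal sketch of force accommodation (admittance control): when the sensed
# force exceeds a limit, shift the commanded position along the force
# direction so the arms yield instead of fighting each other.
import numpy as np

FORCE_LIMIT = 30.0      # newtons, assumed safety threshold
COMPLIANCE = 0.0002     # metres of give per newton of excess force, assumed

def accommodate(commanded_pos, sensed_force):
    """commanded_pos: (3,) target in metres; sensed_force: (3,) in newtons."""
    magnitude = np.linalg.norm(sensed_force)
    if magnitude <= FORCE_LIMIT:
        return commanded_pos                 # within limits: proceed as planned
    direction = sensed_force / magnitude     # unit vector of the external force
    excess = magnitude - FORCE_LIMIT
    return commanded_pos + COMPLIANCE * excess * direction  # yield along the force
```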

Pretty impressive stuff, but the fact remains that the researchers have to do a good amount of hand-holding. “This is a nice result,” says UC Berkeley’s Ken Goldberg, who works in robotic manipulation. “The big challenge is to replace such carefully engineered special purpose programming with new approaches that could learn from demonstrations and/or self-learn to perform tasks like this.”

Which is exactly what the researchers are now working on. The next level of autonomy could be something called imitation learning, in which a human either joysticks the robots through the tasks in the right sequence, or the robot watches the human do it and then mimics it.
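The simplest version of that idea is behavior cloning: record (state, action) pairs from the human demonstration and fit a policy to them. A toy sketch on synthetic data, assuming a linear policy (a real system would use richer sensing and a neural network):

```python
# Toy sketch of behavior cloning, the simplest form of imitation learning:
# fit a policy to (state, action) pairs recorded while a human drives the
# robot. The data here is synthetic; only the fitting step is real.
import numpy as np

rng = np.random.default_rng(0)
states = rng.normal(size=(200, 6))            # e.g. gripper pose + target pose
expert_actions = states @ rng.normal(size=(6, 3)) + 0.01 * rng.normal(size=(200, 3))

# Learn policy weights W minimising ||states @ W - expert_actions||.
W, *_ = np.linalg.lstsq(states, expert_actions, rcond=None)

new_state = rng.normal(size=(1, 6))
print("imitated action:", new_state @ W)      # the policy mimics the demonstrator
```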

The ultimate goal? “The final level is we show the robot an image of the assembled chair and then it has to figure it out,” says Pham. “But I would envision this last step not in the next probably five or six years or so.”

This kind of advanced learning will be essential for robots going forward, because there’s just no way engineers can program them to manipulate every object they come across in the complicated world of humans. That means facing challenges including but not limited to bringing down the tyranny of flat-packed Ikea furniture.

Curse you, Stefan. Curse you.


Original article by Matt Simon, WIRED, 2018.
