Artificial intelligence is a field of research with tremendous potential. Even though AI is written by human coders, the technology is capable of learning on its own. A new effort by University of California, Berkeley researchers focuses on teaching AI systems to learn motion from YouTube clips - a very intriguing, though somewhat disturbing, development.
Teaching Motion to AI Systems
There is always a need to teach artificial intelligence systems new things. Whether it is pure data or getting a feel for motion, the possibilities and opportunities are virtually limitless. Automating most of the teaching and learning process appears to be the next logical step in the evolution of AI as a whole. That may prove easier said than done at first, although significant progress has been made recently.
Based on the recent developments outlined by the University of California, Berkeley, automating AI learning can be approached in many different ways. Researchers are currently exploring ways to teach AI about motion using YouTube videos. The newly developed framework combines computer vision and reinforcement learning to learn skills from a single video clip. While it is initially aimed at motion training, the concept can seemingly be extended to cover many other skills and traits as well.
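For readers curious how such a pipeline might fit together, the sketch below outlines the computer-vision half: estimating a pose in each frame, then smoothing the noisy per-frame estimates into a reference motion the learning stage can track. Every function and value here is a hypothetical placeholder written for illustration, not the researchers' actual code.

```python
from typing import List, Sequence

def estimate_pose(frame: Sequence[float]) -> List[float]:
    # Vision-stage stub: a real system would run a pose estimator on
    # the image; in this toy example the frame already holds joint angles.
    return list(frame)

def smooth(poses: List[List[float]]) -> List[List[float]]:
    # Average each pose with its neighbours to reduce per-frame jitter,
    # turning noisy estimates into a usable reference motion.
    smoothed = []
    for i in range(len(poses)):
        window = poses[max(0, i - 1): i + 2]
        smoothed.append([sum(joint) / len(window) for joint in zip(*window)])
    return smoothed

def extract_reference_motion(frames: List[Sequence[float]]) -> List[List[float]]:
    """Estimate the actor's pose in every frame, then smooth the
    sequence into a single reference motion for the learning stage."""
    return smooth([estimate_pose(f) for f in frames])

# Three toy "frames", each already reduced to two joint angles.
print(extract_reference_motion([[0.0, 1.0], [0.2, 0.8], [0.1, 0.9]]))
```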
So far, the researchers have successfully taught AI systems a set of over 20 acrobatic moves, including handsprings, backflips, and cartwheels. Since the approach does not require motion capture recordings, this development is rather interesting. It could have a big impact on the way human motion is converted to digital form in the future, including the methods used in the movie industry.
As one would expect, there is a lot more to this framework than meets the eye. When a YouTube video is queued up, the framework first tries to determine which poses are being displayed in each frame. It then trains a simulated character to mimic the movement through reinforcement learning. Additionally, the framework can predict how a motion is likely to continue before seeing it play out in the video.
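To make the reinforcement learning step more concrete, here is a minimal sketch of a pose-tracking reward of the kind commonly used in physics-based character imitation: the closer the simulated character's pose is to the pose recovered from the video, the higher the reward. The function name, joint values, and scale parameter are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def imitation_reward(sim_pose, ref_pose, scale=2.0):
    """Reward the simulated character for matching the pose recovered
    from the current video frame. The exponentiated-error form is a
    common choice in character imitation; the Berkeley work's exact
    reward may differ."""
    # Squared joint-angle error between simulation and reference.
    pose_error = np.sum((np.asarray(sim_pose) - np.asarray(ref_pose)) ** 2)
    # Map the error into (0, 1]: a perfect match earns a reward of 1.0.
    return float(np.exp(-scale * pose_error))

# A simulated pose close to the reference earns a reward near 1.0.
ref = [0.10, -0.40, 0.25]   # reference joint angles (radians), illustrative
sim = [0.12, -0.38, 0.24]   # simulated character's joint angles
print(round(imitation_reward(sim, ref), 3))  # ~0.998
```

A policy trained to maximize such a reward, frame after frame, gradually reproduces the whole motion seen in the clip.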
Authors Jason Peng and Angjoo Kanazawa add:
“All in all, our framework is really just taking the most obvious approach that anyone can think of when tackling the problem of video imitation. The key is in decomposing the problem into more manageable components, picking the right methods for those components, and integrating them together effectively. However, imitating skills from videos is still an extremely challenging problem, and there are plenty of video clips that we are not yet able to reproduce: Nimble dance steps, such as this Gangnam style clip, can still be difficult to imitate.”
This new system would be of little use if the learned skills couldn't be applied to very different purposes. The researchers are confident their implementation can transfer the learned skills to different characters and environments, and even help train robots. Given how significantly robots have evolved over the past few years, it is evident this research can play a role of increasing importance in the motion industry.