Press release, Samsung Newsroom, 08.11.19
Samsung Electronics is committed to leading advancements in the field of artificial intelligence (AI), with the hopes of ushering in a brighter future. To discuss what the future may hold for AI technology, and to address and overcome the technological challenges that researchers are currently facing, the company recently hosted its third annual Samsung AI Forum.
Predicting the Next Big Trends in AI
Modern AI technology is not only capable of analyzing data with algorithms, it’s also making strides toward achieving human-like cognition. With increases in computing power and advances in deep learning, AI systems are learning to analyze data on their own and to identify the most appropriate response for a given situation or context. The application of big data in deep learning is accelerating this trend.
While recent advancements have proven promising, the speakers at this year’s AI forum agreed that certain technological challenges remain unaddressed. Prof. Kyunghyun Cho of New York University put the technology’s current status in simple terms. “Imagine a hypothetical AI agent equipped with the current technology,” said Prof. Cho. “It has barely opened its eyes so that it can see and detect objects; it has barely opened its ears to listen to people and hear what they are saying; it has barely opened its mouth to speak short utterances; it is barely learning to move its limbs. In other words, we have just taken a tiny step toward building a truly intelligent machine – or a set of algorithms to drive such an intelligent agent.”
Prof. Noah Smith of the University of Washington expanded on this point, noting that “We’ve seen a lot of progress through the use of increasingly ‘deep’ neural networks trained on ever-larger datasets.” Prof. Smith also identified preparing efficient algorithms, reducing system construction costs and improving data learning methods as points that will need to be addressed in order to take AI technology to the next level.
The speakers also offered their opinions on where AI advancements should focus next, spotlighting areas such as wireless network control, increasing AI’s autonomy, expanding AI’s applications in chemical and biological research, and streamlining interactions between humans and AI.
As Prof. Abhinav Gupta of Carnegie Mellon University explained, “In the past few years, we have made significant advancements in AI, but most of these advancements have been in solving specific tasks where lots of data and supervision are available. On the other hand, humans can perform hundreds of thousands of tasks, often with little to no supervision or data for them. This is the next frontier in AI: developing general purpose smart and intelligent agents without access to lots of data and supervision.”
Going Beyond Deep Learning
The first day of the forum was organized by the Samsung Advanced Institute of Technology (SAIT), which was established under the philosophy of fostering ‘boundless research for breakthroughs.’ Keynote sessions saw distinguished experts deliver presentations on deep learning research methods that are driving AI innovation.
Dr. Kinam Kim, President & CEO of Device Solutions at Samsung Electronics, kicked off the event by discussing Samsung’s motivation for bringing these renowned AI experts together under the same roof. “AI technology is already impacting various aspects of our society,” said Dr. Kim. “Here at the Samsung AI Forum, alongside some of the greatest minds in the industry, we will discuss and suggest directions and strategies for AI development with the hope of making the world a better place.”
Dr. Kim then yielded the stage to the day’s first distinguished speaker, Prof. Yoshua Bengio of the University of Montreal, who presented a lecture entitled ‘Towards Compositional Understanding of the World by Deep Learning.’
“Humans are much better than current AI systems at generalizing out-of-distribution,” Prof. Bengio explained. “We propose that learning purely from text is not sufficient, and we need to strive for learning agents that build a model of the world, to which linguistic labels can be associated.”
“The focus of future deep learning methodology,” he continued, “will be how the agent perspective common in reinforcement learning can help deep learning discover better representations of knowledge.”
Next, Prof. Trevor Darrell of the University of California at Berkeley presented an engrossing lecture entitled ‘Adapting and Explaining Deep Learning for Autonomous Systems.’ Prof. Darrell’s presentation spotlighted limitations of deep learning technology when it comes to developing autonomous driving systems, and introduced approaches to help overcome those issues.
As Prof. Darrell explained, “The learning of layered or ‘deep’ representations has recently enabled low-cost sensors for autonomous vehicles and the efficient automated analysis of visual semantics in online media. But these models have typically required prohibitive amounts of training data, and thus may only work well in the environment they have been trained in.”
Prof. Darrell then suggested approaches for developing explainable deep learning models, including introspective approaches that visualize compositional structures in a deep network, as well as third-person approaches that can provide a natural language justification for the classification decision of a deep model.
Afterward, Prof. Kyunghyun Cho of New York University took to the stage to deliver a riveting presentation entitled ‘Three Flavors of Neural Sequence Generation.’
“Standard neural sequence generation methods,” Prof. Cho explained, “assume a pre-specified generation order, such as left-to-right generation. Despite its wild success in recent years, there’s a lingering question of whether this is necessary, and if there is any other way to generate such a sequence in an order automatically learned from data – without having to pre-specify it, or relying on external tools.” He went on to introduce three alternatives that could potentially be used in sequence modeling: parallel decoding, recursive set prediction, and insertion-based generation.
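The contrast Prof. Cho draws can be illustrated with a hypothetical toy sketch. This is not his actual algorithm, and the sequence, functions and generation order below are invented for illustration; it only shows how the same sentence can be produced left-to-right or in a learned insertion order.

```python
# Toy illustration of generation orders for a target sequence.
# Hypothetical example: the token lists and the insertion order are
# made up, and no neural model is involved.

target = ["the", "cat", "sat"]

def left_to_right(tokens):
    """Standard pre-specified order: append one token at a time."""
    seq = []
    for t in tokens:
        seq.append(t)
    return seq

def insertion_based(tokens, positions):
    """Insert each token at a chosen position, so the order in which
    tokens are generated need not match their final surface order."""
    seq = []
    for t, pos in zip(tokens, positions):
        seq.insert(pos, t)
    return seq

print(left_to_right(target))  # ['the', 'cat', 'sat']
# Generate "sat" first, then insert "the" at the front, then "cat" between them.
print(insertion_based(["sat", "the", "cat"], [0, 0, 1]))  # ['the', 'cat', 'sat']
```

In an actual insertion-based model, the insertion positions would be predicted by the network rather than supplied by hand, which is what allows the generation order to be learned from data.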
Day one’s keynote speeches were followed by a panel discussion, moderated by the University of Montreal’s Prof. Simon Lacoste-Julien, on establishing datasets for deep learning models. Prof. Sanja Fidler of the University of Toronto proposed a new tool that enables more detailed labeling of image data, while Prof. Jackie Cheung of McGill University suggested an alternative to automatic text summarization systems that are based on news articles.
Prof. Jia Deng of Princeton University outlined a method for establishing a new recognition system that enables AI to analyze data more efficiently, and Prof. Lacoste-Julien discussed ways to enhance the learning efficiency of generative adversarial networks (GANs).
Developing AI with Human-like Intelligence
The second day of the forum was organized by Samsung Research, the advanced R&D hub that leads the development of future technologies for Samsung Electronics’ SET (end-products) business. Day two was headlined by experts from a variety of fields who discussed how they’ve been applying AI in their ongoing research and revealed innovative ways to address the technology’s current limitations.
DJ Koh, President and CEO of IT & Mobile Communications Division at Samsung Electronics, set the stage for day two’s illuminating presentations by sharing his perspective on the importance of Samsung’s investment in AI. “In this hyper-connected world, where everything is connected through 5G, AI and IoT technology, the company that delivers the most innovative experience will become the global business leader,” said Koh. “I believe that Samsung will lead the way by spearheading 5G, AI and IoT innovation.”
The first keynote of the day was delivered by Prof. Noah Smith of the University of Washington. Prof. Smith, who is recognized as one of the world’s foremost experts in designing data-centered algorithms for the autonomous analysis of human languages, introduced rational recurrent neural networks (RNNs), and outlined a path toward more efficient deep learning models for language processing.
“Current deep learning models are not based on real language understanding,” Prof. Smith explained. “Therefore, it is hard to explain the reasoning behind their actions. Experiments have found that rational RNNs can perform competitively as language models and for various classification tasks, especially with smaller amounts of annotated data, while using fewer parameters and training faster.”
Next, Prof. Abhinav Gupta of Carnegie Mellon University suggested a new model for empowering vision and robot learning. Prof. Gupta demonstrated how this large-scale self-learning mechanism goes beyond the limitations of supervised learning, and discussed how to incorporate it into future AI agents.
The self-learning model introduced by Prof. Gupta is a methodology in which an AI system models the physical world through visual understanding, and gains an understanding of space and objects. The goal is to establish predictive models based on knowledge of physics, spatial perception and cause-and-effect relationships.
The ‘Invited Talk’ session that followed Prof. Gupta’s presentation discussed concrete methods for extending AI into more areas of our daily lives.
“It’s difficult for AI to make sense of the world using only the data that it’s been trained with, and when variables are involved, the data can produce a conclusion that’s completely different from what the developer intended,” said Prof. Vaishak Belle of Scotland’s University of Edinburgh.
Prof. Belle stressed the need for transparent and responsible AI development, and suggested that more efforts be directed toward 1) developing machine learning technology that’s accessible even to non-AI experts, 2) understanding biases in algorithms to ensure fair decision making, and 3) applying ethical principles to AI systems. The approaches he suggested were based on symbolic logic as it pertains to machine learning development.
Next, Prof. Joan Bruna of New York University introduced recent advancements in the development of deep learning models known as graph neural networks (GNNs). “A graph is an effective tool for integrating interactions involving users, devices and knowledge,” Prof. Bruna explained. “GNNs, which can represent graphs and learn and reason about relations, are key for developing AI that’s capable of human-level intelligence.”
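The core idea behind GNNs can be sketched in a few lines. This is a minimal, hypothetical example of one neighborhood-aggregation layer on a toy four-node graph, not Prof. Bruna’s architecture; the graph, features and weights are all invented for illustration.

```python
import numpy as np

def gnn_layer(adjacency, features, weights):
    """One round of message passing: each node averages the features of
    itself and its neighbors, then applies a linear map and a ReLU."""
    # Add self-loops so each node keeps its own features.
    a = adjacency + np.eye(adjacency.shape[0])
    # Row-normalize: each node takes the mean over itself and its neighbors.
    a = a / a.sum(axis=1, keepdims=True)
    return np.maximum(a @ features @ weights, 0.0)

# Toy 4-node path graph: 0 - 1 - 2 - 3
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
feats = np.eye(4)           # one-hot node features
w = np.full((4, 2), 0.5)    # arbitrary small weight matrix

out = gnn_layer(adj, feats, w)
print(out.shape)  # (4, 2): two learned features per node
```

Stacking such layers lets information propagate across the graph, which is what allows a GNN to “learn and reason about relations” between entities rather than treating them in isolation.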
The sessions that followed were divided into two themes: ‘Vision & Image’ and ‘On-Device, IoT & Social.’ Both tracks featured fascinating presentations, delivered by a who’s who of AI experts, along with engaging discussions focused on AI technology and its applications.
Showcasing Samsung’s Latest AI Advancements
Each Samsung AI Forum offers attendees an opportunity to examine Samsung’s latest advancements in the field of AI research. This year, the company used the forum as a stage to unveil on-device AI translation technology that provides users with fast, reliable service even without an internet connection.
The forum also served as a showcase for the next generation of AI experts. Posters set up outside the lecture hall offered attendees a chance to examine the research and dissertations of undergraduate and graduate students from schools across Korea.
Samsung’s vision for AI technology is focused on creating a user-centric ecosystem of devices and services that enhance users’ lives in meaningful ways. In hosting this event, the company hopes not only to showcase the latest advancements in AI research, but also to actively seek innovative solutions to some of the technology’s most pressing challenges.