Summary: Neuro-Symbolic Concept Learner Research
ICLR (the International Conference on Learning Representations) is globally renowned for presenting and publishing cutting-edge work, and one recent paper caught our eye. We found it insightful enough to share here, along with a few highlights.
The work on NS-CL is a strong validation of SII's knowledge representation approach. A foundational principle of TKR (SII's knowledge representation hyper-graph) is knowledge discovery via hyper-edge manifestation. The NS-CL work in the ICLR 2019 paper advances a generalizable method that extends to new attributes, compositions, and program domains, all of which can be materialized as hyper-edges within TKR. We are pleased that our engine, alongside other work in this arena such as NVIDIA RAPIDS and Microsoft PROSE, can continue to serve as a central building block in advancing automated learning.
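TKR's internals are not described in this post, so as a purely hypothetical sketch (the `HyperGraph` class, its methods, and the node names below are all invented for illustration), materializing a newly discovered composition as a hyper-edge might look like this:

```python
# Hypothetical sketch: materializing a discovered composition as a hyper-edge.
# "HyperGraph", "add_hyperedge", and the node names are invented for
# illustration; they are not TKR's actual API.
from collections import defaultdict

class HyperGraph:
    """A hyper-edge connects any number of nodes, not just two."""
    def __init__(self):
        self.edges = {}                      # edge id -> {"label", "nodes"}
        self.membership = defaultdict(set)   # node -> ids of edges it belongs to
        self._next_id = 0

    def add_hyperedge(self, nodes, label):
        edge_id = self._next_id
        self._next_id += 1
        self.edges[edge_id] = {"label": label, "nodes": frozenset(nodes)}
        for node in nodes:
            self.membership[node].add(edge_id)
        return edge_id

# A newly learned composition ("red metal cube") becomes a single hyper-edge
# linking the concept nodes it is composed from.
g = HyperGraph()
eid = g.add_hyperedge({"red", "metal", "cube"}, label="composition:red-metal-cube")
print(sorted(g.edges[eid]["nodes"]))  # ['cube', 'metal', 'red']
```

The point of the hyper-edge (rather than pairwise edges) is that a learned composition is a single first-class relation over all of its constituent concepts at once.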
Below is the abstract from the paper for your reference.
The Neuro-Symbolic Concept Learner: Interpreting Scenes, Words, and Sentences From Natural Supervision
We propose the Neuro-Symbolic Concept Learner (NS-CL), a model that learns visual concepts, words, and semantic parsing of sentences without explicit supervision on any of them; instead, our model learns by simply looking at images and reading paired questions and answers. Our model builds an object-based scene representation and translates sentences into executable, symbolic programs. To bridge the learning of the two modules, we use a neuro-symbolic reasoning module that executes these programs on the latent scene representation. Analogous to human concept learning, the perception module learns visual concepts based on the language description of the object being referred to. Meanwhile, the learned visual concepts facilitate learning new words and parsing new sentences. We use curriculum learning to guide the search over the large compositional space of images and language. Extensive experiments demonstrate the accuracy and efficiency of our model on learning visual concepts, word representations, and semantic parsing of sentences. Further, our method allows easy generalization to new object attributes, compositions, language concepts, scenes and questions, and even new program domains. It also empowers applications including visual question answering and bidirectional image-text retrieval.
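To make the abstract's central idea concrete, here is a deliberately simplified sketch of executing a symbolic program over an object-based scene representation. The scene format, operator names, and hard (non-probabilistic) attribute matching are our own simplifications; the paper executes programs quasi-symbolically over learned concept embeddings rather than over ground-truth labels.

```python
# Simplified illustration of NS-CL's core idea: a question is parsed into a
# symbolic program, which is executed step by step over an object-based scene
# representation. Attribute values here are hard labels for clarity; NS-CL
# instead scores objects against learned concept embeddings.

scene = [
    {"shape": "cube",   "color": "red",  "material": "metal"},
    {"shape": "sphere", "color": "blue", "material": "rubber"},
    {"shape": "cube",   "color": "blue", "material": "rubber"},
]

def run_filter(objects, attribute, value):
    """Keep only the objects whose attribute matches the queried concept."""
    return [obj for obj in objects if obj[attribute] == value]

def run_count(objects):
    """Reduce a set of objects to its cardinality."""
    return len(objects)

def execute(program, objects):
    """Execute a straight-line program: each step consumes the previous result."""
    result = objects
    for op, *args in program:
        if op == "filter":
            result = run_filter(result, *args)
        elif op == "count":
            result = run_count(result)
        else:
            raise ValueError(f"unknown operation: {op}")
    return result

# "How many blue cubes are there?"
# -> filter(color=blue) -> filter(shape=cube) -> count
program = [("filter", "color", "blue"), ("filter", "shape", "cube"), ("count",)]
print(execute(program, scene))  # 1
```

Because the program is explicit and compositional, adding a new attribute or operator extends the executor without retraining it end to end, which is what makes the generalization claims in the abstract plausible.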
April 23, 2019