Latest papers
This page collects notable research papers from the past two years related to robotics and artificial intelligence. Feel free to add new papers to the list and discuss any papers on the talk page.
Recent papers
August 2021
Computer vision
NeuralMVS: Bridging Multi-View Stereo and Novel View Synthesis (arXiv:2108.03880)
tl;dr Multi-view stereo is a core task in 3D computer vision. NeRF methods do not generalize to novel scenes and are slow to train and test. The authors propose to bridge the gap between these two methodologies with a novel network that can recover 3D scene geometry as a distance function.[1]
Simulation
iGibson 2.0: Object-Centric Simulation for Robot Learning of Everyday Household Tasks (arXiv:2108.03272)
tl;dr iGibson 2.0 is a novel simulation environment built on Bullet that supports the simulation of a more diverse set of household tasks through three key innovations. First, it supports object states, including temperature, wetness level, cleanliness level, and toggled and sliced states, necessary to cover a wider range of tasks. Second, it implements a set of predicate logic functions that map simulator states to logic states like Cooked or Soaked. Third, the simulator can sample valid physical states that satisfy a logic state. This functionality can generate potentially infinite instances of tasks with minimal effort from users.[2]
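To make the second and third innovations concrete, here is a hypothetical Python sketch of how continuous object state might map to logic predicates, and how a physical state satisfying a predicate could be sampled. The names, thresholds, and sampler are illustrative assumptions, not iGibson's actual API.

    # Hypothetical sketch of iGibson 2.0's predicate-logic idea (not its
    # actual API): continuous simulator state maps to boolean predicates,
    # and physical states satisfying a predicate can be sampled.
    import random
    from dataclasses import dataclass

    @dataclass
    class ObjectState:
        temperature: float  # degrees Celsius
        wetness: float      # 0.0 (dry) to 1.0 (soaked)

    def cooked(obj: ObjectState, threshold: float = 70.0) -> bool:
        # Logic state Cooked: temperature reached a per-category threshold.
        return obj.temperature >= threshold

    def soaked(obj: ObjectState, threshold: float = 0.8) -> bool:
        return obj.wetness >= threshold

    def sample_cooked() -> ObjectState:
        # Inverse direction: sample a valid physical state satisfying
        # Cooked, which is how task instances can be generated at scale.
        return ObjectState(temperature=random.uniform(70.0, 120.0),
                           wetness=0.0)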
July 2021
Audio processing
SoundStream: An End-to-End Neural Audio Codec (arXiv:2107.03312)
tl;dr A novel neural audio codec that can efficiently compress speech, music, and general audio at bitrates normally targeted by speech-tailored codecs. SoundStream at 3 kbps outperforms Opus at 12 kbps and approaches EVS at 9.6 kbps.[3]
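SoundStream's compression stage is built around residual vector quantization (RVQ), where a cascade of codebooks each quantizes the residual left by the previous stage. A toy numpy sketch of the general idea follows; the codebook counts and sizes are illustrative, not the paper's configuration.

    # Toy residual vector quantization (RVQ), the quantizer family
    # SoundStream builds on; codebook counts/sizes are illustrative.
    import numpy as np

    def rvq_encode(x, codebooks):
        # Each codebook quantizes the residual left by the previous stage.
        residual = x.copy()
        codes = []
        for cb in codebooks:                        # cb: (K, D) array
            k = int(np.argmin(np.linalg.norm(cb - residual, axis=1)))
            codes.append(k)
            residual = residual - cb[k]
        return codes

    def rvq_decode(codes, codebooks):
        # Reconstruction is the sum of the selected codewords.
        return sum(cb[k] for k, cb in zip(codes, codebooks))

    rng = np.random.default_rng(0)
    codebooks = [rng.normal(size=(256, 64)) for _ in range(8)]
    x = rng.normal(size=64)
    x_hat = rvq_decode(rvq_encode(x, codebooks), codebooks)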
June 2021
Multimodal learning
Multimodal Few-Shot Learning with Frozen Language Models (arXiv:2106.13884)
tl;dr When trained at sufficient scale, auto-regressive language models exhibit the notable ability to learn a new language task after being prompted with just a few examples. Here, the authors present a simple yet effective approach for transferring this few-shot learning ability to a multimodal setting (vision and language).[4]
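A minimal sketch of the idea in PyTorch: only a vision encoder (plus the projection below) is trained, and its output is prepended to the text embeddings as a visual prefix while the language model's weights stay frozen. The module names and dimensions here are assumptions, not the authors' code.

    # Minimal sketch of the "frozen LM" idea: only the vision side trains;
    # its output acts as a soft prompt prefix for a frozen language model.
    import torch
    import torch.nn as nn

    class VisualPrefix(nn.Module):
        def __init__(self, img_feat_dim=2048, n_prefix=2, embed_dim=4096):
            super().__init__()
            self.proj = nn.Linear(img_feat_dim, n_prefix * embed_dim)
            self.n_prefix, self.embed_dim = n_prefix, embed_dim

        def forward(self, img_feats):               # (B, img_feat_dim)
            out = self.proj(img_feats)
            return out.view(-1, self.n_prefix, self.embed_dim)

    # Assumed usage with any causal LM that accepts input embeddings:
    #   text_embeds = frozen_lm.embed(text_tokens)  # (B, T, embed_dim)
    #   inputs = torch.cat([prefix(img_feats), text_embeds], dim=1)
    # Gradients flow through the frozen LM into the vision side only.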
Optimizers
A Generalizable Approach to Learning Optimizers (arXiv:2106.00958)
tl;dr Learns to update optimizer hyperparameters, rather than model parameters directly, using novel features, actions, and a reward function.[5]
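As a loose illustration of the setup (not the paper's implementation), a learned controller can observe coarse training statistics and act by rescaling an inner optimizer's hyperparameters, with reward tied to subsequent loss improvement.

    # Rough sketch of a learned hyperparameter controller (illustrative;
    # the paper's features, action space, and reward differ in detail).
    import torch

    def train_with_controller(model, loss_fn, data, controller, total_steps):
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        prev_loss = None
        for step, (x, y) in enumerate(data):
            loss = loss_fn(model(x), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
            # Features: coarse training statistics rather than raw
            # parameters, which helps the controller generalize.
            feats = torch.tensor([loss.item(), step / total_steps])
            # Action: a multiplicative rescaling of the learning rate.
            scale = float(controller(feats))        # e.g. in [0.5, 2.0]
            for group in opt.param_groups:
                group["lr"] *= scale
            # Reward signal for training the controller itself.
            reward = 0.0 if prev_loss is None else prev_loss - loss.item()
            prev_loss = loss.item()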
May 2021
Memory
Not All Memories are Created Equal: Learning to Forget by Expiring (arXiv:2105.06548)
tl;dr The authors propose Expire-Span, a method that learns to retain the most important information and expire the irrelevant information, which enables Transformers to scale to attend over tens of thousands of previous timesteps efficiently.[6]
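Mechanically, each memory slot predicts its own expiration span, and attention to a memory fades out once its age exceeds that span. A simplified sketch of the masking, with the span limit and ramp length as illustrative values:

    # Simplified Expire-Span masking: each past hidden state i predicts
    # a span e_i, and attention to it decays once its age exceeds e_i.
    import torch

    def expire_mask(hidden, t, w, b, max_span=1024.0, ramp=64.0):
        # hidden: (T, D) past hidden states; t: current timestep index.
        spans = max_span * torch.sigmoid(hidden @ w + b)   # (T,)
        ages = t - torch.arange(hidden.size(0))            # (T,)
        remaining = spans - ages
        # Soft linear ramp keeps the mask differentiable w.r.t. the spans.
        return torch.clamp(1.0 + remaining / ramp, 0.0, 1.0)

    # The mask multiplies attention weights; fully expired memories
    # (mask == 0) can be dropped from the cache entirely, which is what
    # makes attending over tens of thousands of timesteps affordable.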
April 2021
Fine-tuning
The Power of Scale for Parameter-Efficient Prompt Tuning (arXiv:2104.08691)
tl;dr In this work, the authors explore "prompt tuning," a simple but effective mechanism for learning "soft prompts" that condition frozen language models to perform specific downstream tasks.[7]
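Concretely, only a small matrix of continuous prompt embeddings is trained; it is prepended to the input embeddings while every weight of the language model stays frozen. A minimal PyTorch sketch, with sizes and names as assumptions:

    # Minimal soft-prompt sketch: only `prompt` is trainable; the
    # language model itself stays frozen. Sizes/names are assumptions.
    import torch
    import torch.nn as nn

    class SoftPrompt(nn.Module):
        def __init__(self, n_tokens=20, embed_dim=4096):
            super().__init__()
            self.prompt = nn.Parameter(torch.randn(n_tokens, embed_dim) * 0.5)

        def forward(self, text_embeds):             # (B, T, embed_dim)
            batch = text_embeds.size(0)
            prefix = self.prompt.unsqueeze(0).expand(batch, -1, -1)
            return torch.cat([prefix, text_embeds], dim=1)

    # Assumed training setup:
    #   for p in frozen_lm.parameters(): p.requires_grad_(False)
    #   optimizer = torch.optim.Adam(soft_prompt.parameters(), lr=0.3)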
March 2021
Computer vision
NeX: Real-time View Synthesis with Neural Basis Expansion (arXiv:2103.05606)
tl;dr The authors present NeX, a new approach to novel view synthesis based on enhancements of the multiplane image (MPI) representation that can reproduce next-level view-dependent effects in real time. The method achieves the best overall scores across all major metrics on the benchmark datasets, with rendering more than 1000× faster than the state of the art.[8]
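The key modeling change, as best summarized here, is to make each MPI pixel's color a function of the viewing direction v through a small set of learned basis functions, roughly:

    C(v) = k_0 + \sum_{n=1}^{N} k_n H_n(v)

where the coefficients k_0, ..., k_N are stored per pixel and the basis functions H_n(v) are global functions of viewing angle predicted by a neural network, which is what lets view-dependent effects render in real time.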
October 2020
Computer vision
GRF: Learning a General Radiance Field for 3D Scene Representation and Rendering (arXiv:2010.04595)
tl;dr GRF constructs an internal representation for each 3D point of a scene from 2D inputs and renders the corresponding appearance and geometry of any 3D scene viewed from an arbitrary angle.[9]
September 2020
Summarization
Learning to Summarize with Human Feedback (arXiv:2009.01325)
tl;dr Models trained with human feedback outperform much larger supervised models and the reference summaries on the Reddit TL;DR summarization dataset.[10]
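At the heart of the method is a reward model trained on pairwise human preferences between summaries; the policy is then fine-tuned with RL to maximize that reward. A sketch of the standard pairwise preference loss used in this line of work (variable names are illustrative):

    # Pairwise preference loss for the reward model (standard form in
    # this line of work; names are illustrative).
    import torch.nn.functional as F

    def preference_loss(reward_model, prompt, summary_preferred, summary_other):
        r_w = reward_model(prompt, summary_preferred)  # scalar per example
        r_l = reward_model(prompt, summary_other)
        # Maximize the log-probability that the preferred summary wins.
        return -F.logsigmoid(r_w - r_l).mean()

    # The summarization policy is then fine-tuned with RL (PPO in the
    # paper) against this learned reward, with a KL penalty toward the
    # supervised baseline to keep outputs on-distribution.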
December 2019
Meta-learning
Generative Teaching Networks: Accelerating Neural Architecture Search by Learning to Generate Synthetic Training Data (arXiv:1912.07768)
tl;dr This paper investigates whether learning algorithms can automatically generate training data, learning environments, and curricula to help AI agents learn rapidly. GTNs are deep neural networks that generate data and/or training environments on which a learner trains for a few SGD steps before being tested on a target task. The method then differentiates through the entire learning process via meta-gradients to update the GTN parameters to improve performance on the target task.[11]
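The training loop is an instance of meta-gradient learning: the learner's inner SGD steps are kept differentiable so the generator can be updated through them. A compressed sketch for a linear learner, omitting the paper's weight normalization and curriculum details (generator.sample_batch is an assumed interface):

    # Compressed GTN-style meta-training step for a linear learner
    # (illustrative; the paper uses deeper learners and extra tricks).
    import torch
    import torch.nn.functional as F

    def meta_step(generator, real_x, real_y, dim=64, n_classes=10,
                  inner_steps=5, inner_lr=0.1):
        # Fresh learner parameters each meta-step, kept in the graph.
        params = torch.zeros(dim, n_classes, requires_grad=True)
        for _ in range(inner_steps):
            x, y = generator.sample_batch()         # synthetic data
            loss = F.cross_entropy(x @ params, y)
            (g,) = torch.autograd.grad(loss, params, create_graph=True)
            params = params - inner_lr * g          # differentiable SGD
        # Meta-loss: trained learner's performance on real target data;
        # its gradient flows back through every inner step into the
        # generator's parameters.
        meta_loss = F.cross_entropy(real_x @ params, real_y)
        meta_loss.backward()
        return meta_loss.item()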
Older papers
See also
References
1. Radu Alexandru Rosu, Sven Behnke. NeuralMVS: Bridging Multi-View Stereo and Novel View Synthesis. arXiv:2108.03880, 2021.
2. Chengshu Li, Fei Xia, Roberto Martín-Martín, Michael Lingelbach, Sanjana Srivastava, Bokui Shen, Kent Vainio, Cem Gokmen, Gokul Dharan, Tanish Jain, Andrey Kurenkov, Karen Liu, Hyowon Gweon, Jiajun Wu, Li Fei-Fei, Silvio Savarese. iGibson 2.0: Object-Centric Simulation for Robot Learning of Everyday Household Tasks. arXiv:2108.03272, 2021.
3. Neil Zeghidour, Alejandro Luebs, Ahmed Omran, Jan Skoglund, Marco Tagliasacchi. SoundStream: An End-to-End Neural Audio Codec. arXiv:2107.03312, 2021.
4. Maria Tsimpoukelli, Jacob Menick, Serkan Cabi, S. M. Ali Eslami, Oriol Vinyals, Felix Hill. Multimodal Few-Shot Learning with Frozen Language Models. arXiv:2106.13884, 2021.
5. Diogo Almeida, Clemens Winter, Jie Tang, Wojciech Zaremba. A Generalizable Approach to Learning Optimizers. arXiv:2106.00958, 2021.
6. Sainbayar Sukhbaatar, Da Ju, Spencer Poff, Stephen Roller, Arthur Szlam, Jason Weston, Angela Fan. Not All Memories are Created Equal: Learning to Forget by Expiring. arXiv:2105.06548, 2021.
7. Brian Lester, Rami Al-Rfou, Noah Constant. The Power of Scale for Parameter-Efficient Prompt Tuning. arXiv:2104.08691, 2021.
8. Suttisak Wizadwongsa, Pakkapon Phongthawee, Jiraphon Yenphraphai, Supasorn Suwajanakorn. NeX: Real-time View Synthesis with Neural Basis Expansion. arXiv:2103.05606, 2021.
9. Alex Trevithick, Bo Yang. GRF: Learning a General Radiance Field for 3D Scene Representation and Rendering. arXiv:2010.04595, 2020.
10. Nisan Stiennon, Long Ouyang, Jeff Wu, Daniel M. Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, Paul Christiano. Learning to Summarize with Human Feedback. arXiv:2009.01325, 2020.
11. Felipe Petroski Such, Aditya Rawal, Joel Lehman, Kenneth O. Stanley, Jeff Clune. Generative Teaching Networks: Accelerating Neural Architecture Search by Learning to Generate Synthetic Training Data. arXiv:1912.07768, 2019.