{{Tidyup|This page needs to be completely reformatted. Will be changing the tl;drs to be title text so you can hover over links to get the gist of them without clicking.}}
{{Expand|This page needs papers! Probably should set up an automated system so I can just drop Twitter and Arxiv links.}}


This page collects notable research papers related to [[robotics]] and [[artificial intelligence]], particularly ones that hobbyists with minimal resources can put to use toward creating robowaifus. Feel free to add new papers to the list and discuss any papers on the [[Talk:Latest_papers|talk page]]. Papers posted on [https://alogs.space/robowaifu/ /robowaifu/] will also eventually appear here.
 
=== Search sites ===
 
* [https://www.semanticscholar.org/ SemanticScholar] - AI-powered research tool
* [https://paperswithcode.com/ PapersWithCode]
* [https://scholar.google.com/ Google Scholar]
* [https://arxiv.org/ arXiv]
* [https://you.com/ YouChat] - hit and miss since it hallucinates a lot, but it sometimes finds good ones
* [https://www.jmlr.org/ Journal of Machine Learning Research]
* [https://huggingface.co/papers HuggingFace Daily Papers]
 
=== Social media sources ===
 
* [https://twitter.com/_akhaliq @_akhaliq]
* [https://twitter.com/abacaj @abacaj] (small language models)
* [https://twitter.com/DrJimFan @DrJimFan] (multimodal generalist agents)
* [https://twitter.com/gordic_aleksa @gordic_aleksa]
* [https://twitter.com/hardmaru @hardmaru]


== 2023 ==


=== Unsorted ===
 
{{Note|Need to summarize these papers into tl;dr. An automated system for this would be great.}}
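One possible starting point for that automated system is pulling the title and abstract straight from a dropped arXiv link via the public arXiv API, which could then be fed to a summarizer. A minimal sketch, assuming the third-party <code>feedparser</code> package (the function name is only an example):

<syntaxhighlight lang="python">
# Rough sketch: fetch title and abstract for a modern-style arXiv link,
# e.g. https://arxiv.org/abs/2304.12244, via the public arXiv API.
# Requires: pip install feedparser
import re

import feedparser

ARXIV_API = "http://export.arxiv.org/api/query?id_list={arxiv_id}"

def fetch_arxiv_metadata(link: str) -> dict:
    """Return {"title": ..., "abstract": ...} for an arXiv abs/pdf link."""
    match = re.search(r"(\d{4}\.\d{4,5})", link)
    if match is None:
        raise ValueError(f"no modern arXiv id found in {link!r}")
    feed = feedparser.parse(ARXIV_API.format(arxiv_id=match.group(1)))
    entry = feed.entries[0]
    # The Atom feed wraps fields across lines; collapse the whitespace.
    return {
        "title": " ".join(entry.title.split()),
        "abstract": " ".join(entry.summary.split()),
    }

if __name__ == "__main__":
    print(fetch_arxiv_metadata("https://arxiv.org/abs/2304.12244"))
</syntaxhighlight>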
 
==== May 2023 ====
 
* [https://arxiv.org/abs/2305.02412 Plan, Eliminate, and Track -- Language Models are Good Teachers for Embodied Agents]
* [https://arxiv.org/abs/2305.02301 Distilling Step-by-Step! Outperforming Larger Language Models with Less Training Data and Smaller Model Sizes]
* [https://arxiv.org/abs/2305.02297 Making the Most of What You Have: Adapting Pre-trained Visual Language Models in the Low-data Regime]
* [https://arxiv.org/abs/2305.01625 Unlimiformer: Long-Range Transformers with Unlimited Length Input]
* [https://arxiv.org/abs/2305.00833 Learning to Reason and Memorize with Self-Notes]
* [https://arxiv.org/abs/2303.07295 Meet in the Middle: A New Pre-training Paradigm]
 
==== April 2023 ====
 
* [https://arxiv.org/abs/2304.14108 DataComp: In search of the next generation of multimodal datasets]
* [https://arxiv.org/abs/2304.14402 LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions]
* [https://arxiv.org/abs/2304.13705 Learning Fine-Grained Bimanual Manipulation with Low-Cost Hardware]
* [https://arxiv.org/abs/2304.13013 Stable and low-precision training for large-scale vision-language models]
* [https://arxiv.org/abs/2304.12244 WizardLM: Empowering Large Language Models to Follow Complex Instructions]
* [https://arxiv.org/abs/2304.11490 Boosting Theory-of-Mind Performance in Large Language Models via Prompting]
* [https://arxiv.org/abs/2304.11267 Speed Is All You Need: On-Device Acceleration of Large Diffusion Models via GPU-Aware Optimizations]
* [https://arxiv.org/abs/2304.11062 Scaling Transformer to 1M tokens and beyond with RMT]
* [https://arxiv.org/abs/2304.10970 Can GPT-4 Perform Neural Architecture Search?]
* [https://arxiv.org/abs/2304.10592 MiniGPT-4: Enhancing Vision-Language Understanding with Advanced Large Language Models]
* [https://arxiv.org/abs/2304.08460 LongForm: Optimizing Instruction Tuning for Long Text Generation with Corpus Extraction]
* [https://arxiv.org/abs/2304.07327 OpenAssistant Conversations -- Democratizing Large Language Model Alignment]
* [https://arxiv.org/abs/2304.06939 Multimodal C4: An Open, Billion-scale Corpus of Images Interleaved With Text]
* [https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM]
 
==== February 2023 ====
 
* [https://arxiv.org/abs/2302.10866 Hyena Hierarchy: Towards Larger Convolutional Language Models]
* [https://openreview.net/forum?id=__czv_gqDQt EfficientTTS 2: Variational End-to-End Text-to-Speech Synthesis and Voice Conversion]
* [https://arxiv.org/abs/2302.13971 LLaMA: Open and Efficient Foundation Language Models]
* [https://arxiv.org/abs/2302.13939 SpikeGPT: Generative Pre-trained Language Model with Spiking Neural Networks]
* [https://arxiv.org/abs/2302.06868 SwitchPrompt: Learning Domain-Specific Gated Soft Prompts for Classification in Low-Resource Domains]
* [https://arxiv.org/abs/2302.12353 Autonomous Restructuring of Asteroids into Rotating Space Stations]
 
==== January 2023 ====
 
* [https://arxiv.org/abs/2301.13196 Looped Transformers as Programmable Computers]
* [https://arxiv.org/abs/2301.12314 Progressive Prompts: Continual Learning for Language Models]
* [https://arxiv.org/abs/2301.04589 Memory Augmented Large Language Models are Computationally Universal]
* [https://arxiv.org/abs/2301.04246 Generative Language Models and Automated Influence Operations: Emerging Threats and Potential Mitigations]
 
==== December 2022 ====
 
* [https://arxiv.org/abs/2212.08073 Constitutional AI: Harmlessness from AI Feedback]
* [https://arxiv.org/abs/2212.09689 Unnatural Instructions: Tuning Language Models with (Almost) No Human Labor]
 
==== November 2022 ====
 
* [https://arxiv.org/abs/2211.11602 Improving Multimodal Interactive Agents with Reinforcement Learning from Human Feedback]
 
==== October 2022 ====
 
* [https://arxiv.org/abs/2210.12217 Entailer: Answering Questions with Faithful and Truthful Chains of Reasoning]
* [https://arxiv.org/abs/2210.11416 Scaling Instruction-Finetuned Language Models]
 
==== September 2022 ====
 
* [https://arxiv.org/abs/2209.15189 Learning by Distilling Context]
* [https://arxiv.org/abs/2209.07662 Dynamic Generation of Interpretable Inference Rules in a Neuro-Symbolic Expert System]
 
==== August 2022 ====
 
* [https://arxiv.org/abs/2208.07339 LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale]
 
==== May 2022 ====
 
* [https://arxiv.org/abs/2205.12910 NaturalProver: Grounded Mathematical Proof Generation with Language Models]
* [https://arxiv.org/abs/2205.06175 A Generalist Agent]
* [https://arxiv.org/abs/2205.05131v3 UL2: Unifying Language Learning Paradigms]
 
==== March 2022 ====
 
* [https://arxiv.org/abs/2203.13474 CodeGen: An Open Large Language Model for Code with Multi-Turn Program Synthesis]
* [https://arxiv.org/abs/2203.02155 Training language models to follow instructions with human feedback]
* [https://transformer-circuits.pub/2022/in-context-learning-and-induction-heads/index.html In-context Learning and Induction Heads]
 
==== February 2022 ====
 
* [https://arxiv.org/abs/2202.05262 Locating and Editing Factual Associations in GPT]
 
==== December 2021 ====
 
* [https://transformer-circuits.pub/2021/framework/index.html A Mathematical Framework for Transformer Circuits]
* [https://arxiv.org/abs/2112.05682 Self-attention Does Not Need <math>O(n^2)</math> Memory]
 
==== October 2021 ====
 
* [https://arxiv.org/abs/2110.07732 The Neural Data Router: Adaptive Control Flow in Transformers Improves Systematic Generalization]
* [https://arxiv.org/abs/2110.00296 Powerpropagation: A sparsity inducing weight reparameterisation]
 
==== September 2021 ====
 
* [https://arxiv.org/abs/2109.08603 Is Curiosity All You Need? On the Utility of Emergent Behaviours from Curious Exploration]
 
==== December 2020 ====
 
* [https://arxiv.org/abs/2012.06884 AIR-FI: Generating Covert Wi-Fi Signals from Air-Gapped Computers]
 
==== September 2020 ====
 
* [https://arxiv.org/abs/2009.01325 Learning to summarize from human feedback]
 
==== June 2020 ====
 
* [https://arxiv.org/abs/2006.15191 Is SGD a Bayesian sampler? Well, almost]
 
==== January 2020 ====
 
* [https://arxiv.org/abs/2001.04063 ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training]
 
==== December 2012 ====
 
* [https://ieeexplore.ieee.org/document/6386109 MuJoCo: A physics engine for model-based control]
 
==== September 2003 ====
 
* [https://arxiv.org/abs/cs/0309048 Goedel Machines: Self-Referential Universal Problem Solvers Making Provably Optimal Self-Improvements]
 
=== [[Instruction tuning]] ===
{{paper|title=Evol-Instruct: Mass-Producing Open-Domain Instruction Data with Varying Levels of Complexity using Large Language Models|url=https://arxiv.org/abs/2304.12244|tldr=The paper proposes a method called '''[[Evol-Instruct]]''' for creating large amounts of instruction data with different levels of complexity using a large language model (LLM) instead of humans. The generated data is used to fine-tune another LLM called WizardLM. Human evaluations show that Evol-Instruct instructions are better than human-created ones, and WizardLM is preferred over OpenAI ChatGPT for complex tasks. The study suggests that fine-tuning LLMs with AI-evolved instructions is a promising approach for improving their performance.|authors=Xu et al|publication=arXiv:2304.12244|year=2023}}
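The core loop behind the method is simple enough to sketch. The following is an illustration only, not the authors' code; <code>llm</code> here is a placeholder for whatever prompt-to-completion callable (local or hosted model) you have available:

<syntaxhighlight lang="python">
# Rough illustration of in-depth instruction evolution: ask an LLM to rewrite
# a seed instruction into progressively more complex variants.
EVOLVE_PROMPT = (
    "Rewrite the following instruction so that it is more complex, e.g. by "
    "adding constraints, concrete inputs, or extra reasoning steps, while "
    "keeping it answerable. Reply with only the rewritten instruction.\n\n"
    "{instruction}"
)

def evolve_instruction(seed: str, rounds: int, llm) -> list[str]:
    """Return the seed plus `rounds` progressively more complex rewrites."""
    evolved = [seed]
    for _ in range(rounds):
        evolved.append(llm(EVOLVE_PROMPT.format(instruction=evolved[-1])).strip())
    return evolved

# Example: evolve_instruction("Write a haiku about springtime.", 3, llm=my_model)
</syntaxhighlight>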



== 2022 ==

=== November 2022 ===
{{paper|title=Large Language Models Are Human-Level Prompt Engineers|url=https://arxiv.org/abs/2211.01910|tldr=An OpenReview version is also available. Automatic Prompt Engineer (APE) is a method that generates instructions automatically: it builds a pool of candidate instructions and evaluates each by the zero-shot performance of another LLM following it.|authors=Yongchao Zhou, Andrei Ioan Muresanu, Ziwen Han, Keiran Paster, Silviu Pitis, Harris Chan, Jimmy Ba|publication=arXiv|year=2022}}

== 2021 ==

{{Note|PROTIP: You can use <code>sshleifer/distilbart-cnn-12-6</code> to help with summarizing papers. Check the paper template for usage instructions.}}
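A minimal sketch of that tip, assuming the <code>transformers</code> and <code>torch</code> packages are installed (a model of this size runs acceptably on CPU):

<syntaxhighlight lang="python">
# Rough sketch: turn a pasted abstract into a short tl;dr with the
# sshleifer/distilbart-cnn-12-6 checkpoint from the Hugging Face Hub.
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

def tldr(abstract: str) -> str:
    """Summarize one paper abstract into a couple of sentences."""
    out = summarizer(abstract, max_length=96, min_length=24, do_sample=False)
    return out[0]["summary_text"]

if __name__ == "__main__":
    abstract = "Paste the paper's abstract here."  # placeholder input
    print(tldr(abstract))
</syntaxhighlight>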

=== August 2021 ===

==== Computer vision ====
{{paper|title=NeuralMVS: Bridging Multi-View Stereo and Novel View Synthesis|url=https://arxiv.org/abs/2108.03880|tldr=Multi-view stereo is a core task in 3D computer vision. NeRF methods do not generalize to novel scenes and are slow to train and test. The authors propose to bridge the gap between these two methodologies with a novel network that can recover 3D scene geometry as a distance function.|authors=Radu Alexandru Rosu, Sven Behnke|publication=arXiv:2108.03880|year=2021}}

==== Simulation ====
{{paper|title=iGibson 2.0: Object-Centric Simulation for Robot Learning of Everyday Household Tasks|url=https://arxiv.org/abs/2108.03272|tldr=iGibson 2.0 is a novel simulation environment using Bullet that supports the simulation of a more diverse set of household tasks through three key innovations. First, it supports object states, including temperature, wetness level, cleanliness level, and toggled and sliced states, necessary to cover a wider range of tasks. Second, it implements a set of predicate logic functions that map the simulator states to logic states like Cooked or Soaked. Third, the simulator can sample valid physical states that satisfy a logic state. This functionality can generate potentially infinite instances of tasks with minimal effort from the users.|authors=Chengshu Li, Fei Xia, Roberto Martín-Martín, Michael Lingelbach, Sanjana Srivastava, Bokui Shen, Kent Vainio, Cem Gokmen, Gokul Dharan, Tanish Jain, Andrey Kurenkov, Karen Liu, Hyowon Gweon, Jiajun Wu, Li Fei-Fei, Silvio Savarese|publication=arXiv:2108.03272|year=2021}}


=== July 2021 ===

==== Audio processing ====
{{paper|title=SoundStream: An End-to-End Neural Audio Codec|url=https://arxiv.org/abs/2107.03312|tldr=A novel neural audio codec that can efficiently compress speech, music and general audio at bitrates normally targeted by speech-tailored codecs. SoundStream at 3 kbps outperforms Opus at 12 kbps and approaches EVS at 9.6 kbps.|authors=Neil Zeghidour, Alejandro Luebs, Ahmed Omran, Jan Skoglund, Marco Tagliasacchi|publication=arXiv:2107.03312|year=2021}}

=== June 2021 ===

==== Multimodal learning ====
{{paper|title=Multimodal Few-Shot Learning with Frozen Language Models|url=https://arxiv.org/abs/2106.13884|tldr=When trained at sufficient scale, auto-regressive language models exhibit the notable ability to learn a new language task after being prompted with just a few examples. Here, the authors present a simple, yet effective, approach for transferring this few-shot learning ability to a multimodal setting (vision and language).|authors=Maria Tsimpoukelli, Jacob Menick, Serkan Cabi, S. M. Ali Eslami, Oriol Vinyals, Felix Hill|publication=arXiv:2106.13884|year=2021}}

==== Optimizers ====
{{paper|title=A Generalizable Approach to Learning Optimizers|url=https://arxiv.org/abs/2106.00958|tldr=Learning to update optimizer hyperparameters instead of model parameters directly, using novel features, actions, and a reward function.|authors=Diogo Almeida, Clemens Winter, Jie Tang, Wojciech Zaremba|publication=arXiv:2106.00958|year=2021}}


=== May 2021 ===

==== Memory ====
{{paper|title=Not All Memories are Created Equal: Learning to Forget by Expiring|url=https://arxiv.org/abs/2105.06548|tldr=The authors propose Expire-Span, a method that learns to retain the most important information and expire the irrelevant information, which enables Transformers to scale to attend over tens of thousands of previous timesteps efficiently.|authors=Sainbayar Sukhbaatar, Da Ju, Spencer Poff, Stephen Roller, Arthur Szlam, Jason Weston, Angela Fan|publication=arXiv:2105.06548|year=2021}}

=== April 2021 ===

==== Fine-tuning ====
{{paper|title=The Power of Scale for Parameter-Efficient Prompt Tuning|url=https://arxiv.org/abs/2104.08691|tldr=In this work, the authors explore "prompt tuning", a simple but effective mechanism for learning "soft prompts" to condition frozen language models to perform specific downstream tasks.|authors=Brian Lester, Rami Al-Rfou, Noah Constant|publication=arXiv:2104.08691|year=2021}}


=== March 2021 ===

==== Computer vision ====
{{paper|title=NeX: Real-time View Synthesis with Neural Basis Expansion|url=https://arxiv.org/abs/2103.05606|tldr=The authors present NeX, a new approach to novel view synthesis based on enhancements of multiplane image (MPI) that can reproduce next-level view-dependent effects -- in real time. The method achieves the best overall scores across all major metrics on these datasets with more than 1000× faster rendering time than the state of the art.|authors=Suttisak Wizadwongsa, Pakkapon Phongthawee, Jiraphon Yenphraphai, Supasorn Suwajanakorn|publication=arXiv:2103.05606|year=2021}}

=== October 2020 ===

==== Computer vision ====
{{paper|title=GRF: Learning a General Radiance Field for 3D Scene Representation and Rendering|url=https://arxiv.org/abs/2010.04595|tldr=General Radiance Fields construct an internal representation for each 3D point of a scene from 2D inputs and render the corresponding appearance and geometry of any 3D scene viewed from an arbitrary angle.|authors=Alex Trevithick, Bo Yang|publication=arXiv:2010.04595|year=2020}}

=== September 2020 ===

==== Summarization ====
{{paper|title=Learning to Summarize with Human Feedback|url=https://arxiv.org/abs/2009.01325|tldr=Human feedback models outperform much larger supervised models and reference summaries on TL;DR.|authors=Nisan Stiennon, Long Ouyang, Jeff Wu, Daniel M. Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, Paul Christiano|publication=arXiv:2009.01325|year=2020}}

=== December 2019 ===

==== Meta-learning ====
{{paper|title=Generative Teaching Networks: Accelerating Neural Architecture Search by Learning to Generate Synthetic Training Data|url=https://arxiv.org/abs/1912.07768|tldr=This paper investigates the intriguing question of whether learning algorithms can automatically generate training data, learning environments, and curricula in order to help AI agents rapidly learn. GTNs are deep neural networks that generate data and/or training environments that a learner trains on for a few SGD steps before being tested on a target task. The GTN then differentiates through the entire learning process via meta-gradients to update its parameters to improve performance on the target task.|authors=Felipe Petroski Such, Aditya Rawal, Joel Lehman, Kenneth O. Stanley, Jeff Clune|publication=arXiv:1912.07768|year=2019}}


== Older papers ==

== See also ==

== References ==
<references />

__NOTOC__
[[Category:Research]]