Commit

periodic update
marcofavorito committed Sep 24, 2023
1 parent 937b1e1 commit 4a86f59
Showing 30 changed files with 596 additions and 0 deletions.
@@ -0,0 +1,34 @@
abstract: We present an approach to modeling an image-space prior on scene dynamics.
Our prior is learned from a collection of motion trajectories extracted from real
video sequences containing natural, oscillating motion such as trees, flowers, candles,
and clothes blowing in the wind. Given a single image, our trained model uses a
frequency-coordinated diffusion sampling process to predict a per-pixel long-term
motion representation in the Fourier domain, which we call a neural stochastic motion
texture. This representation can be converted into dense motion trajectories that
span an entire video. Along with an image-based rendering module, these trajectories
can be used for a number of downstream applications, such as turning still images
into seamlessly looping dynamic videos, or allowing users to realistically interact
with objects in real pictures.
archiveprefix: arXiv
author: Li, Zhengqi and Tucker, Richard and Snavely, Noah and Holynski, Aleksander
author_list:
- family: Li
given: Zhengqi
- family: Tucker
given: Richard
- family: Snavely
given: Noah
- family: Holynski
given: Aleksander
eprint: 2309.07906v1
file: 2309.07906v1.pdf
files:
- li-zhengqi-and-tucker-richard-and-snavely-noah-and-holynski-aleksandergenerative-image-dynamics2023.pdf
month: Sep
primaryclass: cs.CV
ref: 2309.07906v1
time-added: 2023-09-21-10:22:52
title: Generative Image Dynamics
type: article
url: http://arxiv.org/abs/2309.07906v1
year: '2023'
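
The abstract above describes predicting a per-pixel motion spectrum in the Fourier domain and converting it into dense trajectories that span a whole video. As a rough illustration of that conversion step only (not the paper's code; the array shapes and the number of frequency bands are made-up assumptions), an inverse FFT along the frequency axis turns such a spectrum into per-frame displacements:

```python
# Minimal sketch (not the paper's implementation): turning a per-pixel
# Fourier-domain motion representation into time-domain trajectories.
# Shapes and the number of frequency bands K are illustrative assumptions.
import numpy as np

def spectrum_to_trajectories(spectrum: np.ndarray, num_frames: int) -> np.ndarray:
    """spectrum: complex array of shape (H, W, 2, K) holding K low-frequency
    Fourier coefficients of the (x, y) displacement of each pixel.
    Returns real displacements of shape (num_frames, H, W, 2)."""
    H, W, _, K = spectrum.shape
    # Place the K predicted coefficients into a full one-sided spectrum,
    # leaving all higher frequencies at zero.
    full = np.zeros((H, W, 2, num_frames // 2 + 1), dtype=complex)
    full[..., :K] = spectrum
    # Inverse real FFT along the frequency axis yields per-frame displacements.
    traj = np.fft.irfft(full, n=num_frames, axis=-1)   # (H, W, 2, T)
    return np.moveaxis(traj, -1, 0)                    # (T, H, W, 2)

# Example: a random 16-band spectrum for a 64x64 image and a 150-frame video.
spec = np.random.randn(64, 64, 2, 16) + 1j * np.random.randn(64, 64, 2, 16)
print(spectrum_to_trajectories(spec, 150).shape)       # (150, 64, 64, 2)
```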
11 changes: 11 additions & 0 deletions bookshelf/papers/0c32fb3c9790514c9f5c519107798b5b/info.yaml
@@ -0,0 +1,11 @@
language: en
month: August
ref: noauthor_transformer:_2017
shorttitle: Transformer
time-added: 2023-09-21-16:10:31
title: 'Transformer: {A} {Novel} {Neural} {Network} {Architecture} for {Language}
{Understanding}'
type: misc
url: https://blog.research.google/2017/08/transformer-novel-neural-network.html
urldate: '2023-09-21'
year: '2017'
@@ -0,0 +1,53 @@
abstract: The fixed-size context of Transformer makes GPT models incapable of generating
arbitrarily long text. In this paper, we introduce RecurrentGPT, a language-based
simulacrum of the recurrence mechanism in RNNs. RecurrentGPT is built upon a large
language model (LLM) such as ChatGPT and uses natural language to simulate the Long
Short-Term Memory mechanism in an LSTM. At each timestep, RecurrentGPT generates
a paragraph of text and updates its language-based long-short term memory stored
on the hard drive and the prompt, respectively. This recurrence mechanism enables
RecurrentGPT to generate texts of arbitrary length without forgetting. Since human
users can easily observe and edit the natural language memories, RecurrentGPT is
interpretable and enables interactive generation of long text. RecurrentGPT is an
initial step towards next-generation computer-assisted writing systems beyond local
editing suggestions. In addition to producing AI-generated content (AIGC), we also
demonstrate the possibility of using RecurrentGPT as an interactive fiction that
directly interacts with consumers. We call this usage of generative models by ``AI
As Contents'' (AIAC), which we believe is the next form of conventional AIGC. We
further demonstrate the possibility of using RecurrentGPT to create personalized
interactive fiction that directly interacts with readers instead of interacting
with writers. More broadly, RecurrentGPT demonstrates the utility of borrowing ideas
from popular model designs in cognitive science and deep learning for prompting
LLMs. Our code is available at https://github.com/aiwaves-cn/RecurrentGPT and an
online demo is available at https://www.aiwaves.org/recurrentgpt.
archiveprefix: arXiv
author: Zhou, Wangchunshu and Jiang, Yuchen Eleanor and Cui, Peng and Wang, Tiannan
and Xiao, Zhenxin and Hou, Yifan and Cotterell, Ryan and Sachan, Mrinmaya
author_list:
- family: Zhou
given: Wangchunshu
- family: Jiang
given: Yuchen Eleanor
- family: Cui
given: Peng
- family: Wang
given: Tiannan
- family: Xiao
given: Zhenxin
- family: Hou
given: Yifan
- family: Cotterell
given: Ryan
- family: Sachan
given: Mrinmaya
eprint: 2305.13304v1
file: 2305.13304v1.pdf
files:
- zhou-wangchunshu-and-jiang-yuchen-eleanor-and-cui-peng-and-wang-tiannan-and-xiao-zhenxin-and-hou-yifan-and-cotterell-ryan-and-sachan-mrinmayar.pdf
month: May
primaryclass: cs.CL
ref: 2305.13304v1
time-added: 2023-09-21-09:11:38
title: 'RecurrentGPT: Interactive Generation of (Arbitrarily) Long Text'
type: article
url: http://arxiv.org/abs/2305.13304v1
year: '2023'
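
The recurrence the abstract describes is easy to picture as a loop: each step sends the previous paragraph plus a natural-language memory to an LLM, then writes the updated memory to disk. The sketch below only illustrates that idea under stated assumptions and is not the authors' implementation; `call_llm`, the prompt wording, and the memory file are all hypothetical:

```python
# A minimal sketch of language-based recurrence: the "LSTM state" is plain
# text kept on disk and re-injected into the prompt at every step.
from pathlib import Path
from typing import Callable

MEMORY_FILE = Path("long_term_memory.txt")  # illustrative location

def recurrent_step(previous_paragraph: str, call_llm: Callable[[str], str]) -> str:
    memory = MEMORY_FILE.read_text() if MEMORY_FILE.exists() else ""
    prompt = (
        "Long-term memory:\n" + memory + "\n\n"
        "Previous paragraph:\n" + previous_paragraph + "\n\n"
        "Write the next paragraph, then an updated memory after a line "
        "reading 'MEMORY:'."
    )
    output = call_llm(prompt)
    paragraph, _, new_memory = output.partition("MEMORY:")
    MEMORY_FILE.write_text(new_memory.strip())  # memory persists across steps
    return paragraph.strip()

# Toy run with a stub model, just to show the data flow.
stub = lambda prompt: "A new paragraph of the story.\nMEMORY: the plot so far."
print(recurrent_step("Once upon a time...", stub))
```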
@@ -0,0 +1,12 @@
author: Penrose, Roger
author_list:
- family: Penrose
given: Roger
files:
- penrose-rogercycles-of-time-an-extraordinary-new-view-of-the-universe2010.pdf
publisher: Random House
ref: penrose2010cycles
time-added: 2023-09-21-19:31:35
title: 'Cycles of time: an extraordinary new view of the universe'
type: book
year: '2010'
12 changes: 12 additions & 0 deletions bookshelf/papers/18de98e2a35851f416d675fadc6d15bb/info.yaml
@@ -0,0 +1,12 @@
abstract: The Signal Protocol is a set of cryptographic specifications that provides
end-to-end encryption for private communications exchanged daily by billions of
people around the world. After its publication in 2013, the Signal Protocol was
adopted not only by Signal but well beyond. Technical informat...
journal: Signal Messenger
language: en
ref: noauthor_quantum_nodate
time-added: 2023-09-20-08:22:14
title: Quantum {Resistance} and the {Signal} {Protocol}
type: misc
url: https://signal.org/blog/pqxdh/
urldate: '2023-09-20'
@@ -0,0 +1,12 @@
abstract: The near infinite ramifications of having a uterus
author: Pueyo, Tomas
author_list:
- family: Pueyo
given: Tomas
language: en
ref: pueyo_what_nodate
time-added: 2023-09-20-09:02:58
title: What {Makes} {Men} and {Women} {Different}?
type: misc
url: https://unchartedterritories.tomaspueyo.com/p/what-makes-men-and-women-different
urldate: '2023-09-20'
@@ -0,0 +1,16 @@
author: Contreras, Gilberto and Martonosi, Margaret
author_list:
- family: Contreras
given: Gilberto
- family: Martonosi
given: Margaret
booktitle: 2008 IEEE International Symposium on Workload Characterization
files:
- contreras-gilberto-and-martonosi-margaretcharacterizing-and-improving-the-performance-of-intel-threading-building-blocks2008.pdf
organization: IEEE
pages: 57--66
ref: contreras2008characterizing
time-added: 2023-09-19-16:39:44
title: Characterizing and improving the performance of Intel Threading Building Blocks
type: inproceedings
year: '2008'
@@ -0,0 +1,35 @@
abstract: The growing size of data center and HPC networks poses unprecedented requirements
on the scalability of simulation infrastructure. The ability to simulate such large-scale
interconnects on a simple PC would facilitate research efforts. Unfortunately, as
we first show in this work, existing shared-memory packet-level simulators do not
scale to the sizes of the largest networks considered today. We then illustrate
a feasibility analysis and a set of enhancements that enable a simple packet-level
htsim simulator to scale to the unprecedented simulation sizes on a single PC. Our
code is available online and can be used to design novel schemes in the coming era
of omnipresent data centers and HPC clusters.
archiveprefix: arXiv
author: Besta, Maciej and Schneider, Marcel and Di Girolamo, Salvatore and Singla,
Ankit and Hoefler, Torsten
author_list:
- family: Besta
given: Maciej
- family: Schneider
given: Marcel
- family: Di Girolamo
given: Salvatore
- family: Singla
given: Ankit
- family: Hoefler
given: Torsten
eprint: 2105.12663v1
file: 2105.12663v1.pdf
files:
- besta-maciej-and-schneider-marcel-and-girolamo-salvatore-di-and-singla-ankit-and-hoefler-torstentowards-million-server-network-simulations-on-jus.pdf
month: May
primaryclass: cs.NI
ref: 2105.12663v1
time-added: 2023-09-19-11:39:50
title: Towards Million-Server Network Simulations on Just a Laptop
type: article
url: http://arxiv.org/abs/2105.12663v1
year: '2021'
@@ -0,0 +1,15 @@
author: Badiru, Adedeji B and Asaolu, Olumuyiwa
author_list:
- family: Badiru
given: Adedeji B
- family: Asaolu
given: Olumuyiwa
files:
- badiru-adedeji-b-and-asaolu-olumuyiwahandbook-of-mathematical-and-digital-engineering-foundations-for-artificial-intelligence-a-systems-methodology.pdf
publisher: CRC Press
ref: badiru2023handbook
time-added: 2023-09-23-19:02:20
title: 'Handbook of Mathematical and Digital Engineering Foundations for Artificial
Intelligence: A Systems Methodology'
type: book
year: '2023'
@@ -0,0 +1,14 @@
author: Domingos, Pedro and Lowd, D
author_list:
- family: Domingos
given: Pedro
- family: Lowd
given: D
files:
- pedro-domingos-and-lowd-dmarkov-logic-an-interface-layer-for-artificial-intelligence2009.pdf
journal: Morgan \& Claypool
ref: pedro2009markov
time-added: 2023-09-23-19:01:55
title: 'Markov Logic: An Interface Layer for Artificial Intelligence'
type: article
year: '2009'
@@ -0,0 +1,13 @@
author: Bierlaire, Michel
author_list:
- family: Bierlaire
given: Michel
files:
- bierlaire-micheloptimization-principles-and-algorithms2015.pdf
number: BOOK
publisher: EPFL Press
ref: bierlaire2015optimization
time-added: 2023-09-23-19:03:56
title: 'Optimization: principles and algorithms'
type: book
year: '2015'
@@ -0,0 +1,12 @@
author: Croll, Angus
author_list:
- family: Croll
given: Angus
files:
- croll-angusif-hemingway-wrote-javascript2014.pdf
publisher: No Starch Press
ref: croll2014if
time-added: 2023-09-19-16:35:30
title: If Hemingway wrote JavaScript
type: book
year: '2014'
@@ -0,0 +1,12 @@
author: Cassioli, Andrea
author_list:
- family: Cassioli
given: Andrea
files:
- cassioli-andreaa-tutorial-on-black-box-optimization2013.pdf
publisher: 'Available also from: https://www.lix.polytechnique.fr/~dambrosio…'
ref: cassioli2013tutorial
time-added: 2023-09-22-15:37:41
title: A tutorial on black-box optimization
type: article
year: '2013'
@@ -0,0 +1,42 @@
abstract: Optimization is ubiquitous. While derivative-based algorithms have been
powerful tools for various problems, the absence of gradient imposes challenges
on many real-world applications. In this work, we propose Optimization by PROmpting
(OPRO), a simple and effective approach to leverage large language models (LLMs)
as optimizers, where the optimization task is described in natural language. In
each optimization step, the LLM generates new solutions from the prompt that contains
previously generated solutions with their values, then the new solutions are evaluated
and added to the prompt for the next optimization step. We first showcase OPRO on
linear regression and traveling salesman problems, then move on to prompt optimization
where the goal is to find instructions that maximize the task accuracy. With a variety
of LLMs, we demonstrate that the best prompts optimized by OPRO outperform human-designed
prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks.
archiveprefix: arXiv
author: Yang, Chengrun and Wang, Xuezhi and Lu, Yifeng and Liu, Hanxiao and Le, Quoc
V. and Zhou, Denny and Chen, Xinyun
author_list:
- family: Yang
given: Chengrun
- family: Wang
given: Xuezhi
- family: Lu
given: Yifeng
- family: Liu
given: Hanxiao
- family: Le
given: Quoc V.
- family: Zhou
given: Denny
- family: Chen
given: Xinyun
eprint: 2309.03409v1
file: 2309.03409v1.pdf
files:
- yang-chengrun-and-wang-xuezhi-and-lu-yifeng-and-liu-hanxiao-and-le-quoc-v.-and-zhou-denny-and-chen-xinyunlarge-language-models-as-optimizers202.pdf
month: Sep
primaryclass: cs.LG
ref: 2309.03409v1
time-added: 2023-09-18-09:05:38
title: Large Language Models as Optimizers
type: article
url: http://arxiv.org/abs/2309.03409v1
year: '2023'
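
The optimization loop the abstract outlines (a meta-prompt listing scored past solutions, new candidates proposed by the LLM, evaluation, repeat) can be sketched in a few lines. This is not the paper's code; `propose`, `evaluate`, and the prompt text are placeholder assumptions:

```python
# Minimal sketch of an optimization-by-prompting loop. `propose` stands in
# for the LLM call and `evaluate` for the task-specific scorer.
from typing import Callable

def opro_loop(propose: Callable[[str], str],
              evaluate: Callable[[str], float],
              steps: int = 20) -> list[tuple[str, float]]:
    history: list[tuple[str, float]] = []        # (solution, score), best last
    for _ in range(steps):
        # The meta-prompt lists previously generated solutions with their scores.
        shown = "\n".join(f"{s!r} -> {v:.3f}" for s, v in history[-10:])
        meta_prompt = ("Here are previous solutions and their scores:\n"
                       + shown + "\nPropose a new, better solution.")
        candidate = propose(meta_prompt)
        history.append((candidate, evaluate(candidate)))
        history.sort(key=lambda pair: pair[1])   # keep ordered by score
    return history

# Toy usage: "solutions" are strings and the score is simply their length.
import random, string
toy = lambda _prompt: "".join(random.choices(string.ascii_lowercase,
                                             k=random.randint(1, 20)))
print(opro_loop(toy, evaluate=len, steps=5)[-1])
```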
@@ -0,0 +1,39 @@
abstract: 'In the 21st century, many of the crucial scientific and technical issues
facing humanity can be understood as problems associated with understanding, modelling,
and ultimately controlling complex systems: systems comprised of a large number
of non-trivially interacting components whose collective behaviour can be difficult
to predict. Information theory, a branch of mathematics historically associated
with questions about encoding and decoding messages, has emerged as something of
a lingua franca for those studying complex systems, far exceeding its original narrow
domain of communication systems engineering. In the context of complexity science,
information theory provides a set of tools which allow researchers to uncover the
statistical and effective dependencies between interacting components; relationships
between systems and their environment; mereological whole-part relationships; and
is sensitive to non-linearities missed by commonly used parametric statistical models. In
this review, we aim to provide an accessible introduction to the core of modern
information theory, aimed specifically at aspiring (and established) complex systems
scientists. This includes standard measures, such as Shannon entropy, relative entropy,
and mutual information, before building to more advanced topics, including: information
dynamics, measures of statistical complexity, information decomposition, and effective
network inference. In addition to detailing the formal definitions, in this review
we make an effort to discuss how information theory can be interpreted and develop
the intuition behind abstract concepts like "entropy," in the hope that this will
enable interested readers to understand what information is, and how it is used,
at a more fundamental level.'
archiveprefix: arXiv
author: Varley, Thomas F.
author_list:
- family: Varley
given: Thomas F.
eprint: 2304.12482v2
file: 2304.12482v2.pdf
files:
- varley-thomas-f.information-theory-for-complex-systems-scientists2023.pdf
month: Apr
primaryclass: cs.IT
ref: 2304.12482v2
time-added: 2023-09-18-17:06:01
title: Information Theory for Complex Systems Scientists
type: article
url: http://arxiv.org/abs/2304.12482v2
year: '2023'
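
For readers new to the measures the abstract lists, here is a tiny worked example of Shannon entropy and mutual information on a made-up joint distribution of two binary variables (the numbers are chosen purely for illustration):

```python
# Shannon entropy and mutual information for a toy discrete distribution.
import numpy as np

def entropy(p: np.ndarray) -> float:
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())          # H(X) in bits

# Joint distribution of two binary variables X and Y.
pxy = np.array([[0.4, 0.1],
                [0.1, 0.4]])
px, py = pxy.sum(axis=1), pxy.sum(axis=0)

# Mutual information I(X;Y) = H(X) + H(Y) - H(X,Y).
mi = entropy(px) + entropy(py) - entropy(pxy.ravel())
print(round(mi, 3))  # about 0.278 bits of shared information
```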
@@ -2,6 +2,8 @@ author: Edelkamp, Stefan
author_list:
- family: Edelkamp
given: Stefan
files:
- edelkamp-stefanalgorithmic-intelligence-towards-an-algorithmic-foundation-for-artificial-intelligence2023.pdf
publisher: Springer
ref: edelkamp2023algorithmic
time-added: 2023-06-12-10:49:51
@@ -0,0 +1,11 @@
author: Vieira, Tim
author_list:
- family: Vieira
given: Tim
date: '2023'
ref: noauthor_archive_nodate
time-added: 2023-09-22-15:39:48
title: Archive — {Graduate} {Descent}
type: misc
url: https://timvieira.github.io/blog/
urldate: '2023-09-22'
@@ -0,0 +1,12 @@
author: Pettit, Philip
author_list:
- family: Pettit
given: Philip
files:
- pettit-philipa-theory-of-freedom-from-the-psychology-to-the-politics-of-agency2001.pdf
publisher: Oxford University Press, USA
ref: pettit2001theory
time-added: 2023-09-23-19:03:25
title: 'A theory of freedom: from the psychology to the politics of agency'
type: book
year: '2001'