diff --git a/docs/docs/faq.md b/docs/docs/faq.md
index 91c277cb66..8566b9dfe2 100644
--- a/docs/docs/faq.md
+++ b/docs/docs/faq.md
@@ -15,12 +15,12 @@ In this page, there are some of the most frequently asked questions.
-We have released candidate supervised finetuning (SFT) models using both Pythia
-and LLaMa, as well as candidate reward models for reinforcement learning from
-human feedback training using Pythia, which you can try, and are beginning the
-process of applying (RLHF). We have also released the first version of the
-OpenAssistant Conversations dataset
-[here](https://huggingface.co/datasets/OpenAssistant/oasst1).
+This project has concluded. We have released supervised finetuning (SFT) models
+based on Llama 2, LLaMa, Falcon, Pythia, and StableLM, as well as reward models
+and models trained with reinforcement learning from human feedback (RLHF), all
+of which are available [here](https://huggingface.co/OpenAssistant). In
+addition to our models, we have released three OpenAssistant Conversations
+datasets and a [research paper](https://arxiv.org/abs/2304.07327).
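+
+As a quick illustration, the first of these datasets can be loaded with the
+HuggingFace `datasets` library. This is a minimal sketch: `oasst1` is the
+dataset linked above, and the field names follow its dataset card.
+
+```python
+from datasets import load_dataset
+
+# Load the OpenAssistant Conversations dataset (train/validation splits).
+ds = load_dataset("OpenAssistant/oasst1")
+
+# Each row is a single message node in a conversation tree.
+print(ds["train"][0]["text"])
+```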
@@ -31,9 +31,8 @@ OpenAssistant Conversations dataset
-You can play with our best candidate model
-[here](https://open-assistant.io/chat) and provide thumbs up/down responses to
-help us improve the model in future!
+Our online demonstration is no longer available, but the models can still be
+downloaded [here](https://huggingface.co/OpenAssistant).
@@ -44,37 +43,18 @@ help us improve the model in future!
-The candidate Pythia SFT models are
+All of our models are
[available on HuggingFace](https://huggingface.co/OpenAssistant) and can be
-loaded via the HuggingFace Transformers library. As such you may be able to use
-them with sufficient hardware. There are also spaces on HF which can be used to
-chat with the OA candidate without your own hardware. However, these models are
-not final and can produce poor or undesirable outputs.
+loaded via the HuggingFace Transformers library, or via other runtimes once
+converted to their formats. You may be able to run them yourself given
+sufficient hardware. There are also Spaces on HF which can be used to chat with
+the OA models without your own hardware. However, some of these models are not
+final and can produce poor or undesirable outputs.
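+
+For example, a model can be loaded and queried roughly as follows. This is a
+minimal sketch: the model name is one of the published checkpoints, and the
+prompt format follows the Pythia SFT model cards, so check the card of the
+model you choose.
+
+```python
+from transformers import AutoModelForCausalLM, AutoTokenizer
+
+model_id = "OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5"  # example model
+tokenizer = AutoTokenizer.from_pretrained(model_id)
+model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
+
+# Pythia SFT models expect the <|prompter|>/<|assistant|> chat format.
+prompt = "<|prompter|>What is a lambda function?<|endoftext|><|assistant|>"
+inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+output = model.generate(**inputs, max_new_tokens=100)
+print(tokenizer.decode(output[0], skip_special_tokens=True))
+```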
-LLaMa SFT models cannot be released directly due to Meta's license but XOR
+LLaMa (v1) SFT models cannot be released directly due to Meta's license, but XOR
weights are released on the HuggingFace org. Follow the process in the README
-there to obtain a full model from these XOR weights.
-
-
-
-
-
-
-### Is there an API available?
-
-
-
-There is no API currently available for Open Assistant. Any mention of an API in
-documentation is referencing the website's internal API. We understand that an
-API is a highly requested feature, but unfortunately, we can't provide one at
-this time due to a couple of reasons. Firstly, the inference system is already
-under high load and running off of compute from our sponsors. Secondly, the
-project's primary goal is currently data collection and model training, not
-providing a product.
-
-However, if you're looking to run inference, you can host the model yourself
-either on your own hardware or with a cloud provider. We appreciate your
-understanding and patience as we continue to develop this project.
+there to obtain a full model from these XOR weights. Llama 2 based models do
+not require this XOR step and are released as full weights.
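+
+Conceptually, the XOR scheme works as sketched below. This is an illustration
+only, with hypothetical file names, not the actual conversion script; follow
+the README for real conversions.
+
+```python
+import numpy as np
+
+# The published file is (finetuned XOR original), so XORing it with the
+# original LLaMa weights recovers the finetuned weights byte-for-byte.
+original = np.fromfile("llama_original.bin", dtype=np.uint8)  # hypothetical path
+released = np.fromfile("oa_xor_weights.bin", dtype=np.uint8)  # hypothetical path
+finetuned = np.bitwise_xor(original, released)
+finetuned.tofile("oa_full_model.bin")
+```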
@@ -102,15 +82,13 @@ inference setup and UI locally unless you wish to assist in development.
All Open Assistant code is licensed under Apache 2.0. This means it is available
for a wide range of uses including commercial use.
-The Open Assistant Pythia based models are released as full weights and will be
-licensed under the Apache 2.0 license.
-
-The Open Assistant LLaMa based models will be released only as delta weights
-meaning you will need the original LLaMa weights to use them, and the license
-restrictions will therefore be those placed on the LLaMa weights.
+Open Assistant models are released under the license of their respective base
+models, be that Llama 2, Falcon, Pythia, or StableLM. LLaMa (v1) based models
+are only released as XOR weights, meaning you will need the original LLaMa
+weights to use them.
-The Open Assistant data is released under a Creative Commons license allowing a
-wide range of uses including commercial use.
+The Open Assistant data is released under the Apache 2.0 license, allowing a
+wide range of uses including commercial use.
@@ -138,9 +116,8 @@ you to everyone who has taken part!
-The model code, weights, and data are free. We are additionally hosting a free
-public instance of our best current model for as long as we can thanks to
-compute donation from Stability AI via LAION!
+The model code, weights, and data are free. Our free public instance of the
+best models is no longer available now that the project has concluded.
@@ -151,10 +128,9 @@ compute donation from Stability AI via LAION!
-The current smallest (Pythia) model is 12B parameters and is challenging to run
-on consumer hardware, but can run on a single professional GPU. In future there
-may be smaller models and we hope to make progress on methods like integer
-quantisation which can help run the model on smaller hardware.
+The current smallest models have 7B parameters and are challenging to run on
+consumer hardware, but they can run on a single professional GPU or be
+quantized to run on more widely available hardware.
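+
+As a sketch, 4-bit quantized loading with bitsandbytes through Transformers
+might look like the following; the model name is an example, and memory needs
+vary by model.
+
+```python
+import torch
+from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
+
+model_id = "OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5"  # example model
+bnb_config = BitsAndBytesConfig(
+    load_in_4bit=True,                     # store weights in 4 bits
+    bnb_4bit_compute_dtype=torch.float16,  # run matmuls in fp16
+)
+tokenizer = AutoTokenizer.from_pretrained(model_id)
+model = AutoModelForCausalLM.from_pretrained(
+    model_id, quantization_config=bnb_config, device_map="auto"
+)
+```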
@@ -165,13 +141,7 @@ quantisation which can help run the model on smaller hardware.
-If you want to help in the data collection for training the model, go to the
-website [https://open-assistant.io/](https://open-assistant.io/).
-
-If you want to contribute code, take a look at the
-[tasks in GitHub](https://github.com/orgs/LAION-AI/projects/3) and comment on an
-issue stating your wish to be assigned. You can also take a look at this
-[contributing guide](https://github.com/LAION-AI/Open-Assistant/blob/main/CONTRIBUTING.md).
+This project has now concluded.
@@ -190,104 +160,6 @@ well as accelerate, DeepSpeed, bitsandbytes, NLTK, and other libraries.
-## Questions about the data collection website
-
-
-
-
-### Can I use ChatGPT to help in training Open Assistant, for instance, by generating answers?
-
-
-
-No, it is against their terms of service to use it to help train other models.
-See
-[this issue](https://github.com/LAION-AI/Open-Assistant/issues/471#issuecomment-1374392299).
-ChatGPT-like answers will be removed.
-
-
-
-
-
-
-### What should I do if I don't know how to complete the task as an assistant?
-
-
-Skip it.
-
-
-
-
-
-### Should I fact check the answers by the assistant?
-
-
-
-Yes, you should try. If you are not sure, skip the task.
-
-
-
-
-
-
-### How can I see my score?
-
-
-
-In your [account settings](https://open-assistant.io/account).
-
-
-
-
-
-
-### Can we see how many data points have been collected?
-
-
-
-You can see a regularly updated interface at
-[https://open-assistant.io/stats](https://open-assistant.io/stats).
-
-
-
-
-
-
-### How do I write and label prompts?
-
-
-
-Check the
-[guidelines](https://projects.laion.ai/Open-Assistant/docs/guides/guidelines).
-
-
-
-
-
-
-### Where can I report a bug or create a new feature request?
-
-
-
-In the [GitHub issues](https://github.com/LAION-AI/Open-Assistant/issues).
-
-
-
-
-
-
-### Why am I not allowed to write about this topic, even though it isn't illegal?
-
-
-
-We want to ensure that the Open Assistant dataset is as accessible as possible.
-As such, it's necessary to avoid any harmful or offensive content that could be
-grounds for removal on sites such as Hugging Face. Likewise, we want the model
-to be trained to reject as few questions as possible, so it's important to not
-include prompts that leave the assistant with no other choice but to refuse in
-order to avoid the generation of harmful content.
-
-
-
## Questions about the development process
diff --git a/docs/docs/intro.md b/docs/docs/intro.md
index 326502bfe3..98f50762ca 100644
--- a/docs/docs/intro.md
+++ b/docs/docs/intro.md
@@ -1,3 +1,9 @@
+# Notice
+
+**Open Assistant has now concluded.** Please see
+[this video](https://www.youtube.com/watch?v=gqtmUHhaplo) for more information.
+Thank you to everyone who made this project possible.
+
# Introduction
> The FAQ page is available at