This model is meant to be:
- Artistic and elegant
- Drop-dead easy to work with
- Good at making cool characters and landscapes
- Not bound to or leaning toward any single style
- Killer at digital and conventional art across many aesthetics
- And, above all, fun
Default CFG Scale: 7 ±5
Default Sampler: DPM++ 2M Karras
Default Steps: 25
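The defaults above can be collected into a small settings helper. This is a minimal sketch: the `build_settings` function and `DEFAULTS` dict are illustrative names, not part of any real API; only the sampler name, CFG value, step count, and the ±5 CFG band come from the notes above.

```python
# Recommended defaults taken from the model notes above.
DEFAULTS = {
    "sampler": "DPM++ 2M Karras",
    "cfg_scale": 7.0,
    "steps": 25,
}

def build_settings(cfg_scale: float = 7.0, **overrides) -> dict:
    """Return generation settings, clamping CFG to the suggested 7 +/- 5 band (2-12)."""
    settings = dict(DEFAULTS, **overrides)
    settings["cfg_scale"] = min(12.0, max(2.0, cfg_scale))
    return settings
```

For example, `build_settings(cfg_scale=20)` would clamp back to the top of the suggested range rather than pass an out-of-band value through.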
Characters are easier to generate and more stable, some images are cleaner, and overall detail is improved.
This model is intended to produce high-quality, highly detailed anime style pictures with just a few prompts. Like other anime-style Stable Diffusion models, it also supports danbooru tags, including artists, to generate images.
AbyssOrangeMix2 can generate high-quality, highly realistic illustrations. It can produce illustrations with a fineness and level of detail that would be impossible to draw by hand. It is also suitable for a wide variety of purposes, making it very useful for design and artwork. In addition, it offers an unparalleled new means of expression, generating many types of illustrations to meet a wide range of needs. I encourage you to use "Abyss" to make your designs and artwork richer and higher quality.
Alfamix is great for creating book covers, album covers, intricate landscapes, and rich art; it also produces highly detailed portraits.
A Stable Diffusion model trained with DreamBooth to create pixel art in two styles: sprite art, triggered with the word "pixelsprite", and scene art, triggered with the word "16bitscene".
In your prompt, use the activation token: analog style. You may need to use the words "blur", "haze", and "naked" in your negative prompts. My dataset did not include any NSFW material, but the model seems to be pretty horny anyway. Note that using "blur" and "haze" in your negative prompt can give a sharper image, but also a less pronounced analog film effect.
This model is intended to produce high-quality, highly detailed anime-style images with just a few prompts. Like other anime-style Stable Diffusion models, it also supports danbooru tags for image generation.
For example: 1girl, white hair, golden eyes, beautiful eyes, detail, flower meadow, cumulonimbus clouds, lighting, detailed sky, garden
This is the fine-tuned Stable Diffusion model trained on images from the TV Show Arcane. Use the tokens arcane style in your prompts for the effect.
This model was inspired by SamDoesSexy Blend. Influenced by: SDHero-Bimbo-Bondage, Pit Bimbo, Analog Diffusion, Dreamlike Diffusion, Redshift Diffusion. Core influence: MidJourney v4, Studio Ghibli, CopeSeetheMald v2, F222, SXD 0.8.
This model has been trained from runwayml/stable-diffusion-v1-5 for approximately 1.6 epochs on 1.2m images total from various Instagram accounts (primarily Japanese).
Two anime-style models
Asia SFW
It has been trained using Stable Diffusion 1.5 to generate cinematic images. To utilize it, you must include the keyword "syberart" at the beginning of your prompt.
ColorBomb: FaceBomb + vivid color and lighting. A bit picky about prompts.
More info : https://huggingface.co/mocker/KaBoom
The tokens for V2 are:
charliebo artstyle
holliemengert artstyle
marioalberti artstyle
pepelarraz artstyle
andreasrocha artstyle
jamesdaly artstyle
https://huggingface.co/Conflictx/Complex-Lineart
The model was trained on 768x768 images, so keep the resolution at 768x768 when generating.
Prompt is ComplexLA style. I usually add "high resolution, very detailed, greeble, intricate" to the prompt as well.
Works great for large structures, sci-fi stations, and anything imposing.
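The prompting pattern above (trigger phrase first, quality tags appended) can be sketched as a tiny helper. This is an illustration, not a documented API: the function name `complexla_prompt` is an assumption; only the "ComplexLA style" trigger and the quality-tag list come from the notes above.

```python
# Quality tags the author says they usually append (from the notes above).
QUALITY_TAGS = ["high resolution", "very detailed", "greeble", "intricate"]

def complexla_prompt(subject: str) -> str:
    """Prefix the ComplexLA trigger phrase and append the usual quality tags."""
    return ", ".join(["ComplexLA style", subject, *QUALITY_TAGS])
```

For example, `complexla_prompt("scifi station")` yields a single comma-separated prompt string ready to paste into a generation UI.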
ControlNet is a neural network structure to control diffusion models by adding extra conditions.
Counterfeit is an anime-style Stable Diffusion model. DreamBooth + Merge Block Weights + Merge LoRA
A model that merges the following models by adjusting the weights of each U-Net layer:
Counterfeit v2.5
Basil Mix
Abyss Orange Mix v3.0 A2
This model gives you the ability to create anything you want. The more prompt-writing knowledge you have, the better your results will be: you'll never get a perfect result from just a few words, so fill out your prompt in extensive detail.
Alternatively, use Clip Skip 1 or 2.
A mixture of Dreamlike Diffusion 1.0, SamDoesArt V3 and Kuvshinov style models.
Created mostly for exploring different character concepts with a focus on drawings, but the mix happened to be pretty good at realistic-ish images, all thanks to wonderful models that it uses.
v3.1 accepts more diverse and more detailed prompts, producing gorgeous colors and more realistic shadows. Images have the feel of 3D anime, but the materials look much more realistic. The weak points: some celebrity likenesses are no longer in the model, the slightly too-3D anime look may put some people off, and teeth are a bit lacking in detail.
This mixed model is a combination of my all-time favorites and a new-found favorite, Dreamlike (thanks to JustMaier for the heads-up). A genuinely simple mix of a very popular anime model and Zeipher's powerful, fantastic f222, blended with Dreamlike (minus SD 1.5).
Includes noise offset training: darker images and improved contrast. Read more here: https://www.crosslabs.org/blog/diffusion-with-offset-noise. Best for cinematic/dramatic generations, and capable of very high-fidelity output.
The model is also extremely versatile: it can do beautiful landscapes, excels at semi-realistic characters, can do anime styles if you prompt for them (though you need to weight the anime tags), handles NSFW, works beautifully with embeddings and LoRAs, and has the fantastic noise offset built in to produce rich, dark, beautiful images if you ask for them!
Original model: Dpepteahand3. Fine-tuned on work by several concept artists.
Merged a few checkpoints and got something buttery and amazing. It does great with subjects other than people, too; it can do almost anything. It doesn't need elaborate prompts either: keep it simple, with no need for artist names or "trending on" tags.
Europe and America NSFW
Generate beautiful Asian girls, NSFW.
The images on the following imgur link were all made just using 'knollingcase' as the main prompt
Extremely NSFW biased model!!! But awesome at SFW too. Use it at your own risk of getting unprompted explicit images.
This model features an artistic, matte-painting illustration style.
This is the fine-tuned Stable Diffusion 1.5 model trained on screenshots from a popular animation studio. Use the tokens modern disney style in your prompts for the effect.
A fine-tuned Stable Diffusion model for generating Padorus.
Protogen was warm-started with Stable Diffusion v1-5 and fine-tuned on various high quality image datasets
The PVC figure style is closer to anime than to realism, so adding "anime" to the positive prompt or "realistic" to the negative prompt can sometimes improve results. If you want to avoid overly realistic faces, try this!
Original artwork and all credits go to Sam Yang (samdoesarts) 🎨
This model makes colorful pictures with crisp, sharp colors; characters also look very nice.
furry
A latent diffusion model trained on the artworks of a Japanese artist, yohan1754/Free Style.
This model can generate vtubers for Hololive and Nijisanji. Some vtubers may or may not come out well. It is recommended to give the name a weight of about 1.2 (e.g. (ange katrina:1.2))
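The `(ange katrina:1.2)` notation above is emphasis weighting, where a parenthesized token carries an explicit weight. A minimal formatter for that syntax might look like this; the helper name `weight_token` is an assumption for illustration, and only the `(name:1.2)` syntax and the 1.2 default come from the note above.

```python
def weight_token(token: str, weight: float = 1.2) -> str:
    """Format a prompt token with explicit emphasis weight, e.g. (ange katrina:1.2)."""
    # :g trims trailing zeros so 1.2 renders as "1.2", not "1.200000".
    return f"({token}:{weight:g})"
```

For example, `weight_token("ange katrina")` produces the exact form recommended above.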
This is a mix of some of my favorite models to create something that captures the aesthetic I like but also without sacrificing NSFW capabilities.
3k tends toward realism, with less fantasy/art. 3k3 is the most influenced by digital art, anime, etc. And 3k1 is my personal favourite balance.
This model was trained on nearly 10k high-quality public domain digital artworks with the goal of improving output quality across the board. We find the model to be highly flexible in its ability to mix various styles, subjects, and details. We recommend resolutions above 640px in one or both dimensions for best results.
Riffusion is an app for real-time music generation with stable diffusion.
A merge of anime diffusion models with the goal of producing vibrant, detailed, yet painterly images that retain a subtle impressionistic quality. Supports Danbooru tags in prompts.
This blend model aims to achieve versatility in generating images that are almost realistic but with a touch of fantasy and a polished aesthetic. It performs well with both illustrations and simulated photographs, but it particularly excels with the latter.
Makes good anime images out of the box, without needing an additional LoRA.
An overfit merged LoRA model.
MeinaMix's objective is to produce good art with minimal prompting.