QWEN3-8B Image/Video Caption (Uncensored)
Version 2 - This version is highly attuned to NSFW content. However, due to image-only training, it may caption some videos as if they were still images.
This version requires 24 GB or more of VRAM.
Full finetune (NOT a LoRA merge) of the 8B-parameter model (vision tower frozen)
BF16/TF32 training; unfortunately, due to the size of the model, Adam8bit had to be used.
Version 2 can use nearly any LLM prompt - Version 1 should use the given prompt, in whole or in part.
Details regarding the training of version 1 can be read here.
Note: No image-size safety check is built in. I have captioned 4K images, which are processed into a very large tensor shape; however, reducing images to around 1K resolution is recommended.
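Since no size check is built in, downscaling oversized inputs before captioning is up to the user. A minimal sketch with Pillow, assuming a 1024-pixel longest side as the "1K" target (the exact value and function name are my choices, not from the model card):

```python
from PIL import Image

MAX_SIDE = 1024  # assumed "1K" target; adjust to taste


def downscale_for_captioning(path: str, max_side: int = MAX_SIDE) -> Image.Image:
    """Shrink an image so its longest side is at most max_side, keeping aspect ratio."""
    img = Image.open(path).convert("RGB")
    if max(img.size) > max_side:
        # thumbnail() resizes in place and preserves aspect ratio
        img.thumbnail((max_side, max_side), Image.LANCZOS)
    return img
```

A 4000x3000 image, for example, comes out at 1024x768 instead of producing a very large vision tensor.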
I have an Ampere-series card and cannot convert this to FP8 or NF4 at high quality. If you have experience converting models with Linux and Transformer Engine, DM me.
Comments (7)
Awesome work! One of the unsung heroes of AI
Thank you
Hello master :-) , I wanted to ask if this will be available for other graphics cards, like 16GB ones. Thank you for your work.
So I work with Ampere-series cards - to correctly use all the fancy FP8 compression tools from Nvidia without major workarounds, you need an Ada or Blackwell card. You could try asking user https://huggingface.co/John6666 to convert the model to NF4, so all the folks with 8GB could use it too.
@Felldude Thanks for replying, that would be great so we can all use it.
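For anyone who wants to try NF4 themselves: unlike FP8, bitsandbytes NF4 quantization runs fine on Ampere cards, so a 4-bit load can be done locally at inference time. A sketch using the Hugging Face transformers `BitsAndBytesConfig`; the repo ID and auto class here are placeholders, not the actual model's:

```python
import torch
from transformers import AutoModelForVision2Seq, BitsAndBytesConfig

# NF4 (4-bit NormalFloat) on-the-fly quantization via bitsandbytes.
# This works on Ampere GPUs; FP8 needs Ada/Hopper/Blackwell hardware.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,  # small extra memory saving
)

# "your-repo/qwen3-8b-caption" is a placeholder model ID
model = AutoModelForVision2Seq.from_pretrained(
    "your-repo/qwen3-8b-caption",
    quantization_config=bnb_config,
    device_map="auto",
)
```

This quantizes at load time rather than producing a shareable NF4 checkpoint, but it should get the model under 8 GB of VRAM for inference.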
