Post-Train Your Own Private AI: Step-by-Step DPO & QLoRA Guide
Building private AI doesn't have to be expensive or complicated. In this video, Mason Jung from Cloudera demonstrates how to post-train a language model directly on your own device.
What you will learn:
- How to use Direct Preference Optimization (DPO) and QLoRA for model adaptation.
- Practical, hands-on techniques to adapt an open-source model to your specific workflow.
- Methods for maintaining a private AI environment without high costs.
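The core idea behind DPO is simple enough to sketch in a few lines. Below is a toy, pure-Python version of the standard DPO loss for a single preference pair (this is an illustrative sketch, not code from the video or the linked repository; in practice you would use a library-provided trainer over batches of tokenized data):

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for one (chosen, rejected) preference pair.

    Inputs are summed log-probabilities of each response under the
    policy being trained and under a frozen reference model.
    """
    # Implicit reward margins: how much more the policy favors each
    # response compared to the reference model.
    chosen_logratio = policy_chosen_logp - ref_chosen_logp
    rejected_logratio = policy_rejected_logp - ref_rejected_logp
    # -log(sigmoid(beta * (chosen margin - rejected margin)))
    logits = beta * (chosen_logratio - rejected_logratio)
    return -math.log(1.0 / (1.0 + math.exp(-logits)))

# The loss shrinks as the policy prefers the chosen response more
# strongly than the reference model does.
print(round(dpo_loss(-10.0, -14.0, -11.0, -12.0, beta=0.5), 4))
```

QLoRA complements this by quantizing the frozen base model to 4-bit and training only small low-rank adapter weights, which is what makes on-device post-training affordable.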
Join the Cloudera Community to learn more! 👉 https://community.cloudera.com
Links & Resources:
- Check out the ASAP_DPO_Finetuning Repository: https://github.com/masonjung/ASAP_DPO_Finetuning
- Explore Cloudera AMPs: https://cloudera.github.io/Applied-ML-Prototypes/#/community
#Cloudera #GenerativeAI #DPO #FineTuning #AIResearch #PrivateAI #fyp #tech #short