Oolit
The machine-learning backbone of the CosmoTalker Orbitarium
Mission Brief
Oolit is a lightweight, specialized LoRA adapter fine-tuned on top of the distilgpt2 base model. It powers the conversational AI of the CosmoTalker Orbitarium, delivering efficient, accurate responses to astronomical queries while keeping inference latency low.
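As a quick sanity check, the adapter's configuration can be inspected without downloading the full weights; a minimal sketch using PEFT's PeftConfig (attribute names assume a standard PEFT config; the exact values for Oolit ship in adapter_config.json, see Downloads):

from peft import PeftConfig

# Fetch only adapter_config.json and inspect it
config = PeftConfig.from_pretrained("bhuvanesh-m-dev/oolit")
print(config.peft_type)                # adapter type, e.g. LORA
print(config.base_model_name_or_path)  # the base model: distilgpt2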
Features
Compute Efficient
Optimized for consumer hardware. Runs smoothly on modest GPUs thanks to the distilled base model.
Curated Dataset
Trained on a hand-verified dataset of space, astronomy, and physics questions for educational accuracy.
Plug & Play
Designed as a LoRA adapter, so it attaches easily to existing transformers pipelines via PEFT (see the sketch after this list).
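A minimal sketch of the plug-and-play claim: one way to use the adapter with a standard transformers text-generation pipeline is to merge the LoRA weights into the base model first via PEFT's merge_and_unload(), which yields a plain transformers model any pipeline accepts. Keeping the adapter attached instead is equally valid; the example prompt below is illustrative.

from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

base = AutoModelForCausalLM.from_pretrained("distilgpt2")
tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
model = PeftModel.from_pretrained(base, "bhuvanesh-m-dev/oolit")

# Merge the LoRA weights into the base model, then wrap the result
# in an ordinary text-generation pipeline
generator = pipeline("text-generation", model=model.merge_and_unload(), tokenizer=tokenizer)
print(generator("Q: What is a nebula?\nA:", max_new_tokens=40)[0]["generated_text"])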
Downloads
| File | SHA | Size | Description |
|---|---|---|---|
| adapter_config.json | e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 | 485 B | LoRA configuration |
| adapter_model.bin | a1b2c3d4e5f6g7h8i9j0k1l2m3n4o5p6q7r8s9t0u1v2w3x4y5z6 | 320 MB | Model weights |
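To verify an artifact after downloading, compare its digest against the table; a minimal sketch using Python's hashlib (the local file path is an assumption):

import hashlib

def sha256_of(path: str) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

print(sha256_of("adapter_config.json"))  # compare with the table above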
Integration
Load Oolit with the Hugging Face transformers and peft libraries.
# Install requirements: pip install transformers peft torch
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Read the adapter config to discover the base model it was trained on
config = PeftConfig.from_pretrained("bhuvanesh-m-dev/oolit")

# Load the base model and tokenizer (distilgpt2)
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 tokenizers define no pad token

# Attach the Oolit adapter
model = PeftModel.from_pretrained(model, "bhuvanesh-m-dev/oolit")
model.eval()

# Inference
prompt = "Q: What is a black hole?\nA:"
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(output[0], skip_special_tokens=True))
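By default generate() decodes greedily; for more varied answers, standard transformers sampling parameters can be passed through (the values below are illustrative, not tuned for Oolit):

output = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=True,       # sample instead of greedy decoding
    top_p=0.9,            # nucleus sampling cutoff
    temperature=0.7,      # flatten/sharpen the distribution
    pad_token_id=tokenizer.eos_token_id,
)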
Prompt Template
Q: {question}\nA:
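A hypothetical ask() helper (not part of the Oolit release) that applies this template, reuses the model and tokenizer from the Integration section, and strips the echoed prompt from the decoded output:

def ask(question: str, max_new_tokens: int = 50) -> str:
    """Format a question with Oolit's prompt template and return the answer."""
    prompt = f"Q: {question}\nA:"
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # generate() returns prompt + continuation; keep only the continuation
    text = tokenizer.decode(output[0], skip_special_tokens=True)
    return text[len(prompt):].strip()

print(ask("What is a black hole?"))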
Limitations: Because the base model is distilled, Oolit's capacity for complex reasoning is limited. It is best suited to factual lookup and definitions within the astronomical domain.