Topic Focus
LLM Interview Problems (35)
Practice LLM questions asked in data science interviews.
| Title | Access | Difficulty |
|---|---|---|
| BERT vs GPT Pre-training | Pro | Easy |
| Purpose of Positional Encoding | Pro | Easy |
| Temperature in Text Generation | Free | Easy |
| Top-k and Top-p Sampling | Pro | Easy |
| What Is a Token in LLMs? | Free | Easy |
| What Is BPE Tokenization? | Pro | Easy |
| What Is Model Hallucination? | Pro | Easy |
| What Is RAG? | Pro | Easy |
| Word2Vec vs Contextual Embeddings | Pro | Easy |
| Zero-Shot vs Few-Shot Learning | Free | Easy |
| Chain-of-Thought Prompting | Pro | Medium |
| Chunking Strategies for RAG | Pro | Medium |
| Context Window Limitations | Pro | Medium |
| DPO vs RLHF | Pro | Medium |
| Emergent Abilities in LLMs | Pro | Medium |
| Instruction Tuning | Pro | Medium |
| LoRA Fine-Tuning | Pro | Medium |
| Model Distillation | Pro | Medium |
| Multi-Head Attention | Pro | Medium |
| Perplexity as an Evaluation Metric | Pro | Medium |
| Prompt Engineering Techniques | Pro | Medium |
| Quantization for LLM Inference | Pro | Medium |
| RLHF Training Pipeline | Pro | Medium |
| Self-Attention Mechanism | Pro | Medium |
| Vector Databases for RAG | Pro | Medium |
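As a quick taste of the decoding topics in the list above (temperature, top-k, and top-p sampling), here is a minimal self-contained Python sketch. The function name `sample_next_token` and its parameters are illustrative, not from any particular library:

```python
import math
import random

def sample_next_token(logits, temperature=1.0, top_k=None, top_p=None, rng=None):
    """Sample a token index from raw logits.

    Illustrative helper: applies temperature scaling, then optional
    top-k and top-p (nucleus) filtering, then samples from the
    renormalized distribution.
    """
    rng = rng or random.Random(0)
    # Temperature: divide logits before softmax (lower T -> sharper distribution).
    scaled = [l / temperature for l in logits]
    # Numerically stable softmax.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Candidate indices, most probable first.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    if top_k is not None:
        # Top-k: keep only the k most probable tokens.
        order = order[:top_k]
    if top_p is not None:
        # Top-p: keep the smallest prefix whose cumulative mass reaches p.
        kept, cum = [], 0.0
        for i in order:
            kept.append(i)
            cum += probs[i]
            if cum >= top_p:
                break
        order = kept
    # Renormalize over survivors and draw one index.
    mass = sum(probs[i] for i in order)
    r = rng.random() * mass
    for i in order:
        r -= probs[i]
        if r <= 0:
            return i
    return order[-1]
```

For example, `sample_next_token([2.0, 1.0, 0.1], temperature=1.0, top_k=1)` always returns the argmax index `0`, while raising the temperature and dropping the filters makes lower-probability tokens increasingly likely to be drawn.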