Company: Oracle_MCQ_14nov
Difficulty: medium
Q1. What settings can be adjusted to minimize randomness in large language model responses for a chatbot?
- A lower temperature value
- Greedy sampling
- Top-p sampling
- Increasing context length

Q2. For a sentiment analysis task with limited resources for fine-tuning, what technique allows a large language model to learn the new task without gradient updates?
- QLoRA
- K-shot instruction tuning
- Prompt adaptability with k-shot learning
- Parameter-efficient fine-tuning

Q3. A Generative AI engineer is fine-tuning a large language model for a sentiment classification task. It is observed that the gradient computed on the model becomes too small after only a few iterations, and it does not reduce the training loss. What activation functions are used in hidden layers of large language models?
- ReLU
- GELU
- SiLU
- Sigmoid

Q4. A Translation Lookaside Buffer (TLB) has an access time of 5ns and a hit ratio of 80%. Main memory takes 8 times longer to access than the TLB. On a TLB miss, a main memory access is required for the pa
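To illustrate the randomness-related options in the first question, here is a minimal sketch of temperature-scaled softmax and greedy decoding over a toy logit vector (the logit values are invented for illustration): lowering the temperature concentrates probability mass on the top token, and greedy decoding removes randomness entirely by always taking the argmax.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities; a lower temperature sharpens the distribution."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def greedy_sample(logits):
    """Greedy decoding: deterministically pick the highest-logit token."""
    return max(range(len(logits)), key=lambda i: logits[i])

logits = [2.0, 1.0, 0.5]
print(softmax(logits, temperature=1.0))   # relatively flat distribution
print(softmax(logits, temperature=0.1))   # mass concentrates on token 0
print(greedy_sample(logits))              # 0
```

Increasing context length, by contrast, affects how much input the model can attend to, not the sampling distribution, which is why it does not reduce randomness.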
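The "without gradient updates" question refers to in-context (k-shot) learning: the task is demonstrated in the prompt at inference time and no weights change. A minimal sketch of assembling such a prompt for sentiment analysis (the review texts and the `build_k_shot_prompt` helper are invented for illustration):

```python
def build_k_shot_prompt(examples, query):
    """Assemble a k-shot prompt: k labeled demonstrations followed by the new input.
    The model infers the task from the demonstrations, so no weights are updated."""
    blocks = [f"Review: {text}\nSentiment: {label}" for text, label in examples]
    blocks.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(blocks)

examples = [
    ("The movie was fantastic.", "positive"),
    ("I regret buying this product.", "negative"),
]
prompt = build_k_shot_prompt(examples, "The service was quick and friendly.")
print(prompt)
```

QLoRA and parameter-efficient fine-tuning, by contrast, do perform gradient updates (on quantized or low-rank adapter weights), which is what distinguishes them from k-shot prompting here.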
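The four activation functions listed can be sketched directly; the GELU below uses the common tanh approximation. Note how sigmoid saturates for large |x| (gradients near zero), which connects to the vanishing-gradient scenario in the question:

```python
import math

def relu(x):
    """ReLU: zero for negative inputs, identity for positive inputs."""
    return max(0.0, x)

def sigmoid(x):
    """Sigmoid: squashes inputs into (0, 1); saturates for large |x|."""
    return 1.0 / (1.0 + math.exp(-x))

def silu(x):
    """SiLU (swish): x * sigmoid(x)."""
    return x * sigmoid(x)

def gelu(x):
    """GELU via the widely used tanh approximation."""
    return 0.5 * x * (1.0 + math.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * x ** 3)))

for f in (relu, sigmoid, silu, gelu):
    print(f.__name__, f(1.0), f(-1.0))
```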
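The TLB question is cut off mid-sentence, but the stated numbers support a worked effective-access-time calculation under the common single-level-paging assumption: a hit costs one TLB lookup plus one memory access, and a miss costs an extra memory access for the page-table entry. This is a sketch of that standard formulation, not necessarily the exact question being asked:

```python
t_tlb = 5.0           # TLB access time (ns), given
hit = 0.80            # TLB hit ratio, given
t_mem = 8 * t_tlb     # main memory access = 8x TLB access = 40 ns

# Hit:  TLB lookup + one memory access for the data.
# Miss: TLB lookup + page-table access + data access (single-level paging assumed).
eat = hit * (t_tlb + t_mem) + (1 - hit) * (t_tlb + 2 * t_mem)
print(eat)  # 53.0 ns
```

That is, 0.8 × 45 ns + 0.2 × 85 ns = 36 + 17 = 53 ns.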