Iterative Prompting with LLMs to Limit Hallucination and Tackle Ambiguity
Experimental proposal for COGS 153: Language Comprehension, in which I hypothesized that LLM answers can be improved by iteratively prompting the model to check for contradictions and ambiguity in the question.
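The proposed loop can be sketched roughly as follows. This is a minimal illustration, not the proposal's actual implementation: `query_llm` is a hypothetical stand-in for any real LLM API call (here stubbed with canned responses so the sketch runs), and the prompt wording and round limit are illustrative assumptions.

```python
# Sketch of the proposed iterative-prompting loop.
# query_llm is a hypothetical placeholder for a real LLM API call;
# this stub returns canned responses so the example is runnable.

def query_llm(prompt: str) -> str:
    # Placeholder: a real implementation would call an LLM here.
    return "OK" if "contradiction" in prompt.lower() else "Paris"

def iterative_prompt(question: str, max_rounds: int = 3) -> str:
    """Answer, then repeatedly ask the model to self-check and revise."""
    answer = query_llm(question)
    for _ in range(max_rounds):
        # Ask the model to check the answer for contradictions or
        # ambiguity in the question.
        critique = query_llm(
            f"Question: {question}\nAnswer: {answer}\n"
            "Does the answer contain a contradiction, or is the question "
            "ambiguous? Reply OK if not, otherwise explain the problem."
        )
        if critique.strip() == "OK":
            break  # no issues found; stop iterating
        # Otherwise, revise the answer using the critique.
        answer = query_llm(
            f"Question: {question}\nPrevious answer: {answer}\n"
            f"Critique: {critique}\nGive a revised answer."
        )
    return answer

print(iterative_prompt("What is the capital of France?"))
```

With a real model behind `query_llm`, the loop would terminate either when the model reports no contradiction or ambiguity, or after `max_rounds` revisions.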