Leveraging LLMs for English Dependency Parsing
This coursework explores English dependency parsing with large language models (LLMs), surveying existing research and the limitations of current methods. It analyses linguistic error patterns, compares closed-source and open-weight LLMs, and evaluates models on selected datasets and metrics. Worked examples cover GPT-4o's few-shot prompting performance and self-correction mechanisms. The conclusion summarises the findings, proposes hybrid parsing approaches, and discusses the study's broader implications.
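To make the prompting setup concrete, below is a minimal Python sketch of few-shot dependency parsing with GPT-4o via the OpenAI API, followed by a self-correction pass. The few-shot example, prompt wording, output format, and the UAS/LAS scorer are illustrative assumptions for this sketch, not the coursework's actual configuration or datasets.

```python
# Minimal sketch: few-shot dependency parsing with an LLM, plus a
# self-correction pass and a UAS/LAS scorer.
# Assumptions: the OpenAI Python SDK (>=1.0) is installed and
# OPENAI_API_KEY is set; the prompt format and scorer are
# illustrative, not the coursework's actual setup.
from openai import OpenAI

client = OpenAI()

# One worked example shown to the model before the target sentence,
# in a simplified CoNLL-U-style format: ID, FORM, HEAD, DEPREL.
FEW_SHOT = """Sentence: The dog barked.
Parse:
1\tThe\t2\tdet
2\tdog\t3\tnsubj
3\tbarked\t0\troot
4\t.\t3\tpunct"""

def parse_sentence(sentence: str) -> str:
    """Ask the model for a tab-separated dependency parse."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "You are a dependency parser. Output one token "
                        "per line as: ID<TAB>FORM<TAB>HEAD<TAB>DEPREL."},
            {"role": "user",
             "content": f"{FEW_SHOT}\n\nSentence: {sentence}\nParse:"},
        ],
        temperature=0,  # deterministic output for evaluation
    )
    return response.choices[0].message.content

def self_correct(sentence: str, draft: str) -> str:
    """Self-correction pass: ask the model to review its own parse."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "user",
             "content": f"Review this dependency parse of '{sentence}' "
                        f"and fix any head or relation errors. Keep the "
                        f"same format.\n\n{draft}"},
        ],
        temperature=0,
    )
    return response.choices[0].message.content

def uas_las(pred: list[tuple[int, str]], gold: list[tuple[int, str]]):
    """Unlabeled/labeled attachment scores over (head, deprel) pairs."""
    assert len(pred) == len(gold)
    uas = sum(p[0] == g[0] for p, g in zip(pred, gold)) / len(gold)
    las = sum(p == g for p, g in zip(pred, gold)) / len(gold)
    return uas, las

draft = parse_sentence("LLMs can parse sentences.")
print(self_correct("LLMs can parse sentences.", draft))
```

UAS counts tokens whose predicted head matches the gold head; LAS additionally requires the dependency label to match. These are the standard attachment metrics for dependency parsing, assumed here as the evaluation measures the coursework refers to.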