Long-Distance Dependencies Don’t Have to Be Long: Simplifying through Provably (Approximately) Optimal Permutations

Anthology ID: P19-2012
Volume: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop
Month: July
Year: 2019
Address: Florence, Italy
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 89–99
DOI: 10.18653/v1/P19-2012
Bibkey: bommasani-2019-long

Cite (ACL): Rishi Bommasani. 2019. Long-Distance Dependencies Don’t Have to Be Long: Simplifying through Provably (Approximately) Optimal Permutations. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop, pages 89–99, Florence, Italy. Association for Computational Linguistics.
Cite (Informal): Long-Distance Dependencies Don’t Have to Be Long: Simplifying through Provably (Approximately) Optimal Permutations (Bommasani, ACL 2019)

Abstract
Neural models at the sentence level often operate on the constituent words/tokens in a way that encodes the inductive bias of processing the input in a similar fashion to how humans do. However, there is no guarantee that the standard ordering of words is computationally efficient or optimal. To help mitigate this, we consider a dependency parse as a proxy for the inter-word dependencies in a sentence and simplify the sentence with respect to combinatorial objectives imposed on the sentence-parse pair. The associated optimization results in permuted sentences that are provably (approximately) optimal with respect to minimizing dependency parse lengths and that are demonstrably simpler. We evaluate our general-purpose permutations within a fine-tuning schema for the downstream task of subjectivity analysis. Our fine-tuned baselines reflect a new state of the art for the SUBJ dataset, and the permutations we introduce lead to further improvements, with a 2.0% increase in classification accuracy (absolute) and a 45% reduction in classification error (relative) over the previous state of the art.
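To make the abstract's combinatorial objective concrete, the sketch below scores a word ordering by its total dependency length (the sum of linear distances between heads and dependents in the parse) and finds the exact minimizer by brute force on a toy parse. This is a minimal illustration only, not the paper's method: the example sentence, its arcs, and the function names are invented for this sketch, and exhaustive search is feasible only for very short sentences.

```python
from itertools import permutations

def total_dependency_length(order, edges):
    """Sum of linear distances between heads and dependents when
    the words are laid out in the given order."""
    position = {word: i for i, word in enumerate(order)}
    return sum(abs(position[h] - position[d]) for h, d in edges)

def brute_force_optimal(n, edges):
    """Exhaustive search over all n! orderings; feasible only for
    very short sentences, shown here purely for illustration."""
    return min(permutations(range(n)),
               key=lambda order: total_dependency_length(order, edges))

# Hypothetical parse of "dogs with long tails bark"
# (indices: 0=dogs, 1=with, 2=long, 3=tails, 4=bark)
words = ["dogs", "with", "long", "tails", "bark"]
edges = [(4, 0), (0, 1), (1, 3), (3, 2)]  # (head, dependent) arcs

identity = tuple(range(len(words)))
best = brute_force_optimal(len(words), edges)
print("original order length:", total_dependency_length(identity, edges))  # 8
print("optimal order length: ", total_dependency_length(best, edges))      # 4
print("permuted words:", [words[i] for i in best])
```

For these toy arcs, the natural left-to-right order has total length 8, while an optimal permutation lays the words out along the arc path so that every arc spans adjacent positions, for a total of 4. The paper's contribution, per the abstract, is to produce provably (approximately) optimal permutations without this kind of exponential search.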