Motivation and Aim
In the past couple of decades, the Indian NLP and Speech Technology community has shown an ever-increasing interest in the development of Language Resources for Indian languages. This has primarily been because, as the community grew, increasing research in and development of Language Technology brought an acute awareness of a serious lack of appropriate resources across the languages of India. A number of initiatives have been taken to address this issue by the Government of India as well as academia and industry. Many of these initiatives have targeted specific NLP and Speech technologies, fostering collaborations between several academic institutions across the country and the active involvement of industry partners. As expected, when a number of resources are simultaneously being developed by several research groups across many languages, the need for standards takes on some urgency. In the past few years, the Government of India, in consultation with experts from academia and industry, has taken the lead in developing appropriate standards for NLP resources. This concentrated effort has resulted in a number of resources, standards, tools and technologies becoming available for many Indian languages. While the activity in the Indian language community may still not be comparable to, for example, the work done on European languages, we firmly believe that the community has come of age and is at a point where sharing of ideas and experience is necessary, not only within the community but also with other communities working in similar situations, so that India can move forward in planning for future language technology resources and requirements while maintaining its linguistic diversity.
India has four language families, of which Indo-Aryan (76.87% of speakers) and Dravidian (20.82% of speakers) are the major ones. These families have contributed 22 constitutionally recognized (‘scheduled’ or ‘national’) languages, of which Hindi has ‘official’ status in addition to its ‘national’ status. Besides these, India has 234 mother tongues reported by the 2001 census, and many more (over 1,600) languages and dialects. Of the major Indian languages, Hindi is spoken in 10 (out of a total of 25) states of India by over 60% of the population, followed by Telugu and Bangla. There are more than 18 scripts in India which need to be standardized and supported by technology. Devanagari is the most widely used script, being used by more than six languages.
Indian languages are under the exclusive control of the respective states in which they are spoken. Therefore, every state may decide on measures to promote its language. However, since these 22 languages are national (constituent) languages, the centre (the Union of India) also has a responsibility towards each of them, with certain additional responsibilities towards Hindi, which is the national as well as the official language of the Indian Union. From time to time, minor or neglected languages claim constituent status. The situation becomes more complex when such a language becomes the rallying point for the demand for a new state or autonomous region.
This complex linguistic scene in India is a source of tremendous pressure on the Indian government not only to have comprehensive language policies, but also to create resources for their maintenance and development. In the age of information technology, there is a greater need to strike a fine balance in allocating resources to each language, keeping in view political compulsions, the electoral potential of a linguistic community, and other issues.
Language promotion and maintenance by the Ministry of Human Resource Development
The MHRD, through its language agency CIIL and many academic institutions across the country, has set up the Linguistic Data Consortium for Indian Languages (LDC-IL). This consortium, modelled on the LDC at the University of Pennsylvania (USA), will not only create and manage large Indian language databases, but will also provide a forum for researchers in India and other countries working on Indian languages to publish and build products based on such databases that would not otherwise be possible.
LDC-IL is expected to:
- Become a repository of linguistic resources in all Indian languages in the form of text, speech and lexical corpora.
- Facilitate creation of such databases by different organizations which could contribute to and enrich the main LDC-IL repository.
- Set appropriate standards for data collection and storage of corpora for different research and development activities.
- Support language technology development and sharing of tools for language-related data collection and management.
- Facilitate training and manpower development in these areas through workshops, seminars etc. on technical as well as process-related issues.
- Create and maintain the LDC-IL web-based services that would be the primary gateway for accessing its resources.
- Design or provide help in the creation of appropriate language technology based on the linguistic data for mass use.
- Provide the necessary linkages between academic institutions, individual researchers and the masses.
The Technology Development for Indian Languages (TDIL) program of the Ministry of Communications and IT (MCIT)
The MCIT started a program called TDIL in 1991 for building technology solutions for Indian languages. The stated objectives of TDIL are:
(i) to develop information processing tools and techniques,
(ii) to facilitate human-machine interaction without language barrier,
(iii) to create and access multilingual knowledge resources and integrate them to develop innovative user products and services.
TDIL has made available in the public domain many basic software tools and fonts for 22 Indian languages. On the language resources front, TDIL has been running several language corpora projects in consortium mode. Some of the significant projects are:
• Development of LRs for English to Indian Languages Machine Translation (MT) System,
• Development of LRs for Indian Language to Indian Language Machine Translation System
• Development of LRs for Sanskrit-Hindi Machine Translation
• Development of LRs for Robust Document Analysis & Recognition System for Indian Languages
• Development of LRs for On-line handwriting recognition system
• Development of LRs for Cross-lingual Information Access
• Development of Speech Corpora/Technologies
• Parallel Language Corpora development in all 22 national languages (ILCI)
Apart from the consortium-based efforts, there have been several institution- or organization-specific efforts to develop standard resources for Indian languages. Some prominent efforts include the Hindi WordNet developed at IIT Bombay and the POS-tagged corpora in Bangla, Hindi and Sanskrit developed by Microsoft Research India in collaboration with Jawaharlal Nehru University, New Delhi.
Given the amount of activity in the area of Language Technology Resources at the government, institutional and individual researcher levels, we organized the first workshop in Istanbul in 2012. The workshop was a huge success in terms of participation and the number of submissions. For the half-day workshop, we selected 8 full papers and 18 posters. The workshop featured three distinguished speakers in the inaugural session: Swaran Lata (Head, TDIL, Dept of IT, Govt of India), Khalid Choukri (CEO, ELDA) and Prof. Pushpak Bhattacharyya (IIT Bombay). The workshop also featured a panel discussion on "India and Europe - making a common cause in LTRs", in which seven distinguished panelists participated: Khalid Choukri, Joseph Mariani, Pushpak Bhattacharyya, Swaran Lata, Monojit Choudhury, Zygmunt Vetulani and Dafydd Gibbon. The valedictory address was given by Nicoletta Calzolari, Director, ILC-CNR, Italy.
The 2nd Workshop on Indian Language Data: Resources and Evaluation was organized on 27 May 2014 at the Harpa Conference Centre, Reykjavik, Iceland. The workshop was a big success, with 7 full papers and 11 posters/demos selected for presentation in the half-day workshop. The workshop featured prominent speakers, with the inaugural address delivered by Nicoletta Calzolari and the keynote by Dafydd Gibbon. The panel discussion on “India and Europe - making a common cause in LTRs” was coordinated by Hans Uszkoreit and included among its panelists Joseph Mariani, Shyam Aggarwal, Zygmunt Vetulani, Dafydd Gibbon and Panchanan Mohanty. The second workshop was remarkable on another count: it saw a collaboration emerging between Indian and European partners on two platforms – IMAGACT and TypeCraft – which led to joint poster presentations by researchers from India and Europe. The workshop ended with a valedictory address by Swaran Lata, Head of the TDIL program of the Government of India.
The 3rd Workshop on Indian Language Data: Resources and Evaluation was organized on 24 May 2016 at the Grand Hotel Bernardin Conference Center, Portorož, Slovenia. The workshop was a big success, with 7 full papers, 5 short papers and 11 posters/demos selected for presentation in the half-day workshop. The workshop featured the inaugural address and keynote by Nicoletta Calzolari. The panel discussion on "Structured Language Resources (SLRs) in India and Europe - avenues for closer collaboration" was coordinated by Jan Hajič and included among its panelists Joseph Mariani, Zygmunt Vetulani, Jalpa Zaladi and Sunayana Sitaram. The workshop ended with a valedictory address by Zygmunt Vetulani, Adam Mickiewicz University, Poznań, Poland.
The 4th Workshop on Indian Language Data: Resources and Evaluation was organized on 12 May 2018 at the Phoenix Seagaia Resort, Miyazaki, Japan. The workshop was a big success, with 2 full papers, 3 short papers and 10 posters/demos selected for presentation in the half-day workshop. The workshop featured the inaugural address by Khalid Choukri (ELRA, France) and the keynote by Chris Cieri (LDC, Philadelphia, USA). The panel discussion on "Language Technology Resources – Exploring new frontiers of collaborative R&D" was coordinated by Zygmunt Vetulani (Adam Mickiewicz University, Poland) and included among its panelists Daan van Esch (Google), Kalika Bali (Microsoft Research India) and Alessandro Panunzi (University of Florence, Italy). The workshop ended with a valedictory address by Joseph Mariani (LIMSI-CNRS, Paris).
The 5th Workshop on Indian Language Data: Resources and Evaluation was organized online on 24 May 2020. The workshop was a big success, with 5 papers and 7 posters selected for presentation in the half-day workshop. The workshop featured the inaugural address by M. Jagadesh Kumar (VC, JNU) and the keynote by Anoop Kunchukuttan (Microsoft). The panel discussion on "New directions for Indian language technology resources" was coordinated by Kalika Bali (Microsoft Research India) and included among its panelists Monojit Choudhury (Microsoft Research India), Pushpak Bhattacharyya (IIT Bombay/Patna), Dafydd Gibbon (Universität Bielefeld, Germany), S.S. Agrawal (KIIT), Zygmunt Vetulani (Adam Mickiewicz University, Poland), Patrick Paroubek (LIMSI-CNRS, France) and Vijay Kumar (TDIL, Govt of India). The workshop ended with a valedictory address by Panchanan Mohanty (GLA, Mathura).
The broader objectives of WILDRE-6 will be:
- To map the status of Indian Language Resources
- To investigate challenges related to creating and sharing various levels of language resources
- To promote a dialogue between language resource developers and users
- To provide an opportunity for researchers from India to collaborate with researchers from other parts of the world
Description of the Topic
WILDRE-6 will have a special focus on Demos of Indian Language Technology. In the past few years, as more resources have been developed and made available, there has been increased activity in developing usable technology based on them. WILDRE-6 would therefore like to encourage and widen the Demo track to allow the community to showcase their demos and have mutually beneficial interactions with each other as well as with resource developers.
WILDRE-6 will invite technical, policy and position paper submissions on the following topics related to Indian Language Resources:
- Digital Humanities, heritage computing
- Corpora - text, speech, multimodal, methodologies, annotation and tools
- Lexicons and Machine-readable dictionaries
- Ontologies
- Grammars
- Language resources for basic NLP, IR and Speech Technology tasks, tools and Infrastructure for constructing and sharing language resources
- Standards or specifications for language resources applications
- Licensing and copyright issues
Shared Task
Following the success of the five previous WILDRE workshops, WILDRE-6 will include shared tasks for Indian languages. The organizers of the shared tasks will provide datasets and evaluation platforms to evaluate the systems developed by participants. Further details will be available soon on the WILDRE-6 website.
Both the submission and review processes will be handled electronically. The review process will be double-blind.
WILDRE-6 – Workshop on Indian Language Data: Resources and Evaluation
6th WORKSHOP ON INDIAN LANGUAGE DATA: RESOURCES AND EVALUATION (WILDRE)
Date: Monday, 20th June 2022
Venue: Le Palais du Pharo, Marseille (France)
(Organized under the platform of LREC 2022 (20-25 June 2022))
Website:
Main website - http://sanskrit.jnu.ac.in/conf/wildre6
Submit papers on - https://www.softconf.com/lrec2022/WILDRE-6
LREC website - http://lrec2022.lrec-conf.org/en/
WILDRE-6 – the 6th Workshop on Indian Language Data: Resources and Evaluation – is being organized in Marseille (France) on 20th June 2022 under the LREC platform. India has huge linguistic diversity and has seen concerted efforts from the Indian government and industry towards developing language resources. The European Language Resources Association (ELRA) and its associate organizations have been very active and successful in addressing the challenges and opportunities related to language resource creation and evaluation. It is therefore a great opportunity for resource creators of Indian languages to showcase their work on this platform and also to interact with and learn from those involved in similar initiatives all over the world. The broader objectives of WILDRE-6 will be:
- To map the status of Indian Language Resources
- To investigate challenges related to creating and sharing various levels of language resources
- To promote a dialogue between language resource developers and users
- To provide an opportunity for researchers from India to collaborate with researchers from other parts of the world
DATES
April 08, 2022 Paper submissions due
May 09, 2022 Notification of acceptance
May 23, 2022 Camera-ready papers due
June 20, 2022 Workshop
SUBMISSIONS
Papers must describe original and unpublished work, either completed or in progress. Each submission will be reviewed by three program committee members.
Accepted papers will be given up to 10 pages (for full papers) or 5 pages (for short papers and posters) in the workshop proceedings, and will be presented as oral presentations or posters.
Papers should be formatted according to the stylesheet provided on the LREC 2022 website (https://lrec2022.lrec-conf.org/en/submission2022/authors-kit/). Papers should be fully anonymised, with anything identifying the author(s) removed. Papers should be submitted in PDF format via the LREC submission website.
We are seeking submissions under the following categories:
Full papers (10 pages)
Short papers (work in progress – 5 pages)
Posters (innovative ideas/proposals, student research proposals – 1-page poster sample)
Demo (of working online/standalone systems)
WILDRE-6 will have a special focus on Demos of Indian Language Technology. In the past few years, as more resources have been developed and made available, there has been increased activity in developing usable technology based on them. WILDRE-6 would like to encourage and widen the Demo track to allow the community to showcase their demos and have mutually beneficial interactions with each other as well as with resource developers.
WILDRE-6 will invite technical, policy and position paper submissions on the following topics related to Indian Language Resources:
Digital Humanities, heritage computing
Corpora - text, speech, multimodal, methodologies, annotation and tools
Lexicons and Machine-readable dictionaries
Ontologies
Grammars
Language resources for basic NLP, IR and Speech Technology tasks, tools and Infrastructure for constructing and sharing language resources
Standards or specifications for language resources applications
Licensing and copyright issues
Both the submission and review processes will be handled electronically. The review process will be double-blind. The workshop website will provide the submission guidelines and the link for electronic submission.
When submitting a paper from the START page, authors will be asked to provide essential information about resources (in a broad sense, i.e. also technologies, standards, evaluation kits, etc.) that have been used for the work described in the paper or are a new result of their research. Moreover, ELRA encourages all LREC authors to share the described LRs (data, tools, services, etc.) to enable their reuse and the replicability of experiments, including evaluation experiments.
For further information on this initiative, please refer to http://lrec2022.lrec-conf.org/en/
Contact:
Atul Kr. Ojha, National University of Ireland, Galway, Ireland & Panlingua Language Processing LLP, India shashwatup9k@gmail.com
Sixth Workshop on Indian Language Data: Resources and Evaluation (WILDRE-6) Shared Tasks
The Sixth Workshop on Indian Language Data: Resources and Evaluation (WILDRE-6) at LREC 2022 will include two shared tasks: Universal Dependency based Morpho-Syntactic Parsing in Indian Languages (UDParse-IL) and Speech Technologies for Under-resourced Indian Languages (SpeechTech-IL).
(a) Universal Dependency based Morpho-Syntactic Parsing in Indian Languages (UDParse-IL)
The primary objective of the UDParse-IL task is to identify notable techniques for developing universal dependency parsers, especially when a language is low-resourced. In this task, participants will be provided with training, development and testing datasets annotated with dependency relations in 10 Indian languages - Bhojpuri, Hindi (including Hindi-English code-switched data), Marathi, Sanskrit, Tamil, Telugu, Urdu, Punjabi and Magahi - and we will solicit systems based on novel zero-shot/few-shot (or other cross-lingual and multilingual) methods for these low-resource Indian languages. All the languages included in this task, with the exception of Hindi and Urdu, have no more than 1,350 annotated sentences each. The data for the first nine languages mentioned above will be shared by ÚFAL, Charles University, from the Universal Dependencies (UD) repositories. We will provide test data and an evaluation platform to evaluate the participants' parsers. The parsers will be evaluated using LAS, UAS, precision, recall and F-score. One of the primary goals of the task is to ascertain the effectiveness of the implemented methods for unseen but closely related languages, in addition to the languages for which training data is provided. To this end, the test data will include some surprise languages; the names of these surprise/unseen test languages will be revealed only at test time, when a test set for these languages will be provided.
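For clarity on the two attachment metrics, the short sketch below shows how UAS and LAS are conventionally computed from CoNLL-U files. This is an illustrative sketch only, not the official shared-task scorer; the file names and the assumption of identical tokenization between gold and system output are hypothetical.

```python
# Illustrative sketch only (not the official UDParse-IL scorer).
# Computes UAS/LAS from two CoNLL-U files, assuming the system output
# has exactly the same tokenization as the gold file.

def read_conllu(path):
    """Yield (head, deprel) for every word line in a CoNLL-U file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue                      # skip blank lines and comments
            cols = line.split("\t")
            if "-" in cols[0] or "." in cols[0]:
                continue                      # skip multiword tokens / empty nodes
            yield cols[6], cols[7]            # HEAD and DEPREL columns

def attachment_scores(gold_path, system_path):
    gold = list(read_conllu(gold_path))
    system = list(read_conllu(system_path))
    assert len(gold) == len(system), "gold and system token counts differ"
    n = len(gold)
    uas = sum(g[0] == s[0] for g, s in zip(gold, system)) / n   # head only
    las = sum(g == s for g, s in zip(gold, system)) / n         # head + label
    return uas, las

if __name__ == "__main__":
    # "gold.conllu" and "system.conllu" are placeholder file names.
    uas, las = attachment_scores("gold.conllu", "system.conllu")
    print(f"UAS = {uas:.4f}  LAS = {las:.4f}")
```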
(b) Speech Technologies for Under-resourced Indian Languages (SpeechTech-IL)
Neural or deep learning techniques are currently applied in state-of-the-art automated systems that report significant performance improvements, but they typically require a large amount of high-quality data. To advance Automatic Speech Recognition (ASR) and Text-to-Speech (TTS) systems for low-resource languages, zero-shot/unsupervised approaches are a notable development in neural learning, enabling ASR/TTS systems to be built for languages where the amount of audio and/or transcribed speech data is small or even non-existent. In this shared task, we will solicit systems for under-resourced Indian languages based on novel zero-shot (or similar) methods and/or linguistically encoded features. The goal will be to ascertain the effectiveness of the implemented methods for the given languages as well as for unseen similar languages. The languages are Hindi, Odia, Marathi and Sanskrit. To this end, the test data will include three surprise languages; the names of these surprise/unseen test languages will be revealed only at test time, when a test set for these languages will be provided. The system(s) will be evaluated using WER, precision, recall and F-score.
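As a reference point for the primary ASR metric, the sketch below computes word error rate (WER) as a word-level edit distance between a reference transcript and a system hypothesis. This is an illustrative sketch only, not the official evaluation script; the example strings are hypothetical.

```python
# Illustrative sketch only (not the official SpeechTech-IL scorer).
# WER = (substitutions + deletions + insertions) / number of reference words,
# computed here via word-level Levenshtein distance.

def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i                          # delete all i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j                          # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            deletion = dp[i - 1][j] + 1
            insertion = dp[i][j - 1] + 1
            dp[i][j] = min(substitution, deletion, insertion)
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

if __name__ == "__main__":
    # Hypothetical Hindi example: one substitution out of four reference words.
    print(wer("मेरा नाम राम है", "मेरा नाम श्याम है"))   # -> 0.25
```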
Shared Task Dates
Jan 31, 2022: Registration
Feb 09, 2022: Training and Validation Data Release [please register to get the data]
April 15, 2022: Test Set Release
April 22, 2022: System Submission Due
April 29, 2022: System Results
May 06, 2022: System Description Paper Due
May 16, 2022: Notification of acceptance
May 23, 2022: Camera-ready papers due
Contact
For questions related to shared tasks (a) and (b), please send an email to wildre-speechtechil@googlegroups.com and wildre-udparseil@googlegroups.com respectively.
For urgent/specific queries on the workshop or shared tasks please contact Atul Kr. Ojha at atulkumar.ojha@insight-centre.org.
Conference Chairs
Girish Nath Jha, Chairman, Commission for Scientific and Technical Terminology, MoE, GOI (on deputation from Jawaharlal Nehru University, India)
Kalika Bali, Microsoft Research India Lab, Bangalore
Sobha L, AU-KBC, Anna University
Details of the Conference Chairs
Girish Nath Jha
Chairman,
Commission for Scientific and Technical Terminology, MoE, GOI
&
Professor in Computational Linguistics,
School of Sanskrit and Indic Studies,
J.N.U., New Delhi - 110067
Phone: +91-11-26741308 (o) Email: girishjha@gmail.com
Prof. Girish Nath Jha teaches Computational Linguistics at the School of Sanskrit and Indic Studies, Jawaharlal Nehru University (JNU), and is currently the Chairman of the Commission for Scientific and Technical Terminology, MoE, GOI. He also holds concurrent appointments in JNU’s Center of Linguistics and Special Center of E-Learning, and is an associated faculty member in the ABV School of Management and Entrepreneurship. Prof. Jha previously served as Director of International Collaboration at JNU during 2016-18.
His research interests include Indian language corpora and standards, Sanskrit and Hindi linguistics, science and technology in ancient texts, lexicography, machine translation, e-learning, web-based technologies, RDBMS, software design and localization. Details of his work can be found at http://sanskrit.jnu.ac.in. Prof. Jha did collaborative research with the Center for Indic Studies, University of Massachusetts, Dartmouth, MA, USA as the "Mukesh and Priti Chatter Distinguished Professor of History of Science" during 2009-12, and was a visiting professor at Yogyakarta State University, Indonesia, in 2013. He was awarded DAAD fellowships in 2014 and 2016 to teach Computational Linguistics in the Digital Humanities department at the University of Würzburg, Germany, and was a visiting professor at the University of Florence in the summer of 2016.
Prof. Jha did his M.A., M.Phil. and Ph.D. in Linguistics (Computational Linguistics) at JNU and then earned another master's degree in Linguistics (specializing in Natural Language Interfaces) from the University of Illinois at Urbana-Champaign, USA, in 1999. He then worked as a software engineer and software development specialist in the USA before joining JNU in 2002. Prof. Jha has published books with publishers such as Springer Verlag and Cambridge Scholars Publishing, and has over 133 research papers/presentations/publications and over 178 invited talks. Prof. Jha has held several consultancies, including with Nuance, SwiftKey, Microsoft Research USA, Microsoft Research India, Microsoft Corporation, the Linguistic Data Consortium (University of Pennsylvania), the University of Massachusetts Dartmouth and EZDI, among others. Prof. Jha has completed several sponsored projects for Indian language technology development and has led a consortium of 17 Indian universities/institutes for developing corpora and standards for Indian languages, sponsored by the Ministry of Electronics and IT (MeitY), Govt. of India.
Prof. Jha has been chair/co-chair of at least 13 international seminars/conferences and has been a nominated member of more than 30 committees. He was nominated to the editorial board of a leading Springer journal and has been a reviewer for many leading journals and proceedings in the area of NLP. He has supervised 42 M.Phil. and 50 Ph.D. students. Prof. Jha's efforts in collaboration with the software industry have led to the development of key technologies for Indian languages, including English-Urdu MT for Microsoft Bing Translator and predictive keyboards for several Indian languages by SwiftKey. His awards include the Datta Peetha award for Sanskrit linguistics (2017) and the KECSS Felicitation award for the promotion of the Sharada script (2016).
Kalika Bali
Researcher (Multilingual Systems)
Microsoft Research Labs India
Address: “Vigyan” #9 Lavelle Road, Bangalore 560025 India
Phone: +91-80-66586218 Email: kalikab@microsoft.com
Kalika Bali is a Principal Researcher at Microsoft Research India working in the areas of Machine Learning, Natural Language Systems and Applications, as well as Technology for Emerging Markets. Her research interests lie broadly in the area of Speech and Language Technology especially in the use of linguistic models for building technology that offers a more natural Human-Computer as well as Computer-Mediated interactions, and technology for Low Resource Languages.
She is currently working on Project Mélange which tries to understand, process and generate Code-mixed language data for both text and speech. She is also interested in how social and pragmatic functions affect language use, in code-mixed as well as monolingual conversations, and how to build effective computational models of sociolinguistics and pragmatics that can lead to more aware Artificial Intelligence.
She is very passionate about NLP and speech technology for Indian languages. She believes that local language technology, especially with speech interfaces, can help millions of people gain entry into a world that has so far been almost inaccessible to them. She has served, and continues to serve, on several government and other committees that work on Indian language technologies as well as linguistic resources and standards for NLP/Speech.
Sobha L.
CLRG Group
AU-KBC Research Centre
MIT campus of Anna University
Chennai-600044
Phone: +91-44-22232711 Email: sobha@au-kbc.org
Sobha Lalitha Devi is a scientist with the Information Sciences Division of the AU-KBC Research Centre, Anna University, Chennai, India. Her research interests are in the fields of discourse analysis, information extraction and retrieval, and she specializes in anaphora resolution. She is one of the key organizers of the Discourse Anaphora and Anaphor Resolution Colloquium (DAARC). Beyond these areas, she also works on automatic plagiarism detection and organizes tracks in plagiarism detection. In the area of information retrieval, she and her students started the Tamil search engine www.searchko.in. She is involved in two major consortium projects funded by the Department of Information Technology, Government of India, on Cross-Lingual Information Access and the Indian Language to Indian Language Machine Translation System (Tamil-Hindi bidirectional), and in a European Union (EU) funded project, WIQ-EI (Web Information Quality Evaluation Initiative). She has been visiting faculty at universities in the UK, Spain and Portugal, was an Erasmus Mundus coordinator for 2010-2012, and is associated with the University of Wolverhampton.
Organizing Committee
Atul Kr. Ojha, National University of Ireland, Galway, Ireland & Panlingua Language Processing LLP, India
Girish Nath Jha, Chairman, Commission for Scientific and Technical Terminology, MoE, GOI (on deputation from JNU)
Kalika Bali, Microsoft Research India Lab, Bangalore
Sobha L, AU-KBC, Anna University
Program Committee (to be updated)
- Adil Amin Kak, Kashmir University
- Anil Kumar Singh, IIT BHU, Benaras
- Anupam Basu, Director, NIT Durgapur
- Anoop Kunchukuttan, Microsoft AI and Research, India
- Arul Mozhi, University of Hyderabad
- Asif Iqbal, IIT Patna, Patna
- Atul Kr. Ojha, National University of Ireland, Galway, Ireland & Panlingua Language Processing LLP, India
- Bharathi Raja Asoka Chakravarthi, National University of Ireland Galway, Ireland
- Bogdan Babych, Heidelberg University, Germany
- Chao-Hong Liu, ADAPT Centre, Dublin City University, Ireland
- Claudia Soria, CNR-ILC, Italy
- Dafydd Gibbon, Universität Bielefeld, Germany
- Daan van Esch, Google, USA
- Dan Zeman, ÚFAL, Charles University, Prague, Czech Republic
- Delyth Prys, Bangor University, UK
- Dipti Mishra Sharma, IIIT, Hyderabad
- Diwakar Mishra, Amazon, Bangalore, India
- Dorothee Beermann, Norwegian University of Science and Technology (NTNU)
- Elizabeth Sherly, IIITM-Kerala, Trivandrum
- Esha Banerjee, Google, USA
- Eveline Wandl-Vogt, Austrian Academy of Sciences, Austria
- Georg Rehm, DFKI, Germany
- Girish Nath Jha, Chairman, Commission for Scientific and Technical Terminology, MoE, GOI and JNU, New Delhi
- Jan Odijk, Utrecht University, The Netherlands
- Jolanta Bachan, Adam Mickiewicz University, Poland
- Joseph Mariani, LIMSI-CNRS, France
- Jyoti D. Pawar, Goa University
- Kalika Bali, MSRI, Bangalore
- Khalid Choukri, ELRA, France
- Lars Hellan, NTNU, Norway
- M J Warsi, Aligarh Muslim University, India
- Malhar Kulkarni, IIT Bombay
- Manji Bhadra, Bankura University, West Bengal
- Marko Tadic, Croatian Academy of Sciences and Arts, Croatia
- Massimo Moneglia, University of Florence, Italy
- Monojit Choudhury, MSRI Bangalore
- Narayan Choudhary, CIIL, Mysore
- Nicoletta Calzolari, ILC-CNR, Pisa, Italy
- Niladri Shekhar Dash, ISI Kolkata
- Partha Talukdar, Google Research, India
- Panchanan Mohanty, GLA, Mathura
- Pinky Nainwani, Cognizant Technology Solutions, Bangalore
- Pushpak Bhattacharyya, IIT Bombay
- Rajeev R R, ICFOSS, Trivandrum
- Ritesh Kumar, Agra University
- Shantipriya Parida, Silo AI, Finland
- S.S. Agrawal, KIIT, Gurgaon, India
- Sachin Kumar, EZDI, Ahmedabad
- Santanu Chaudhury, Director, IIT Jodhpur
- Sivaji Bandyopadhyay, Director, NIT, Silchar
- Sobha L, AU-KBC Research Centre, Anna University
- Stelios Piperidis, ILSP, Greece
- Subhash Chandra, Delhi University
- Swaran Lata, Retired Head, TDIL, MCIT, Govt of India
- Vijay Kumar, TDIL, MCIT, Govt of India
- Virach Sornlertlamvanich, Thammasat University, Bangkok, Thailand
- Vishal Goyal, Punjabi University, Patiala
- Zygmunt Vetulani, Adam Mickiewicz University, Poland
Inaugural Speaker: TBD
Keynote Speaker: TBD
Valedictory Speaker: TBD
Panel Discussion: TBD
Workshop Programme
14:00–14:45 | Inaugural Session
14:00–14:05 | Welcome by Workshop Chairs
14:25–15:00 | Keynote Lecture: LITMUS – Linguistically Informed Training and Testing of Multilingual Systems
Abstract:
Massively multilingual language models (MMLMs) offer the promise of truly universalizing NLP technology across languages through their ability to perform cross-lingual zero-shot and few-shot transfer. One no longer needs a large annotated sentiment corpus in Telugu or a paraphrase training set for Marathi to train state-of-the-art sentiment/paraphrase systems for these languages. Such labeled data in a few languages (say, English and Hindi) coupled with a powerful MMLM is sufficient to solve the task. However, we need at least some labeled data in Telugu or Marathi for testing the resultant systems. Unfortunately, we do not even have sufficient test data for tasks and languages of interest.
In this talk, I will give an overview of Project LITMUS – Linguistically Informed Training and Testing of Multilingual Systems, where we build several ML models for predicting the performance of cross-lingual zero-shot and few-shot transfer for a task on target languages with little or no test data. As we shall see, performance prediction also indirectly helps us to predict training data configurations that would give certain desired performance across a set of languages, and accordingly strategize data collection plans. Furthermore, it allows us to unravel factors which influence cross-lingual transfer – a hard and important problem.
15:00–16:00 | Oral Session I
15:00–15:30 | Introducing EM-FT for Manipuri-English Neural Machine Translation – Rudali Huidrom and Yves Lepage
15:30–16:00 | L3Cube-HingCorpus and HingBERT: A Code Mixed Hindi-English Dataset and BERT Language Models – Ravindra Nayak and Raviraj Joshi
16:00–16:30 | Coffee Break / Poster Session
16:00–16:30 | Leveraging Sub Label Dependencies in Code Mixed Indian Languages for Part-Of-Speech Tagging using Conditional Random Fields – Akash Kumar Gautam
16:00–16:30 | HindiWSD: A package for word sense disambiguation in Hinglish & Hindi – Mirza Yusuf, Praatibh Surana and Chethan Sharma
16:00–16:30 | Pāṇinian Phonological Changes: Computation and Development of Online Access System – Sanju and Subhash Chandra
16:00–16:30 | L3Cube-MahaNER: A Marathi Named Entity Recognition Dataset and BERT models – Onkar Litake, Maithili Ravindra Sabane, Parth Sachin Patil, Aparna Abhijeet Ranade and Raviraj Joshi
16:00–16:30 | Identifying Emotions in Code Mixed Hindi-English Tweets – Sanket Sonu, Rejwanul Haque, Mohammed Hasanuzzaman, Paul Stynes and Pramod Pathak
16:00–16:30 | Digital Accessibility and Information Mining of Dharmaśāstric Knowledge Traditions – Arooshi Nigam and Subhash Chandra
16:00–16:30 | Language Resource Building and English-to-Mizo Neural Machine Translation Encountering Tonal Words – Vanlalmuansangi Khenglawt, Sahinur Rahman Laskar, Santanu Pal, Partha Pakray and Ajoy Kumar Khan
16:00–16:30 | Classification of Multiword Expressions in Malayalam – Treesa Cyriac and Sobha Lalitha Devi
16:00–16:30 | Bengali and Magahi PUD Treebank and Parser – Pritha Majumdar, Deepak Alok, Akanksha Bansal, Atul Kr. Ojha and John P. McCrae
16:00–16:30 | Makadi: A Large-Scale Human-Labeled Dataset for Hindi Semantic Parsing – Shashwat Vaibhav and Nisheeth Srivastava
16:00–16:30 | Automatic Identification of Explicit Connectives in Malayalam – Kumari Sheeja S and Sobha Lalitha Devi
16:00–16:30 | Web based System for Derivational Process of Kṛdanta based on Pāṇinian Grammatical Tradition – Sumit Sharma and Subhash Chandra
16:00–16:30 | Universal Dependency Treebank for Odia Language – Shantipriya Parida, Kalyanamalini Shabadi, Atul Kr. Ojha, Saraswati Sahoo, Satya Ranjan Dash and Bijayalaxmi Dash
16:00–16:30 | Computational Referencing System for Sanskrit Grammar – Baldev Khandoliyan and Ram Kishor
16:30–17:00 | Oral Session II
16:30–17:00 | L3Cube-MahaCorpus and MahaBERT: Marathi Monolingual Corpus, Marathi BERT Language Models, and Resources – Raviraj Joshi
17:00–17:45 | Panel Discussion
17:45–17:55 | Valedictory Session
17:55–18:00 | Vote of Thanks
Our Collaborators
Benefits to our Sponsors
- Opportunity to demo your technology
- Present a poster of your research
- Opportunity to participate in the panel discussion

Please contact Prof. Girish Nath Jha (girishjha@jnu.ac.in) for sponsorship-related queries.