Dear All,

You are kindly invited, together with your students, to attend the upcoming AVOBMAT <https://avobmat.hu/> webinar, hosted by GWDG <https://gwdg.de/>. The webinar will present AVOBMAT’s key features and use cases and will conclude with a live Q&A session.

*Time:* Wednesday, 11 February, 15:00–16:30 CET
*Register* here: https://events.gwdg.de/event/1361/

Since launching in October 2025, AVOBMAT has been used in 40 countries.

Recent use in teaching and research

In the last couple of months, AVOBMAT has been presented and used at several events and university courses, including:

- *Print Culture and Public Spheres in Central Europe (1500–1800)* COST Action Hackathon <https://pcps-ce.eu/pcpsce-vienna-hackathon-report/> (University of Vienna)
- “Multilingual Analysis and Visualization of Metadata and Texts in Central and Eastern European Literary Studies,” Digital Methods for a Comparative Study of Central European Literary History <https://ceelawl.abtk.hun-ren.hu/digital-methods/workshop-2.html> (University of Krakow)
- *Emigrants’ Heritage Workshop* <https://www.hungarianconservative.com/articles/diaspora/emigrants-heritage-workshop-joint-project-american-hungarian-anniversaries/> (Hungarian National Archives)
- IT course <https://www.wdib.uw.edu.pl/wydzial/struktura-wydzialu/katedry/katedra-informatologii> (University of Warsaw)

What AVOBMAT enables

We have preprocessed *5.4 billion tokens* across multiple databases and made *32 databases publicly available*.

*Generative AI* and LLMs are increasingly present in teaching and research, but LLM outputs can be difficult to reproduce without careful constraints and documentation. AVOBMAT is built for *transparent, reproducible, corpus-scale analysis*. It uses transformer language models to enrich texts and metadata, enabling interactive analysis across collections, so researchers and students can focus on critical interpretation and discovery.
*What sets AVOBMAT apart for DH researchers, teachers, and libraries/GLAM:*

- *Ready-to-use corpora* for teaching: 32 public databases for seminar-based assignments
- *Explore and share your own collections*: analyse your datasets and share private databases for collaboration
- *Scale:* 5.4B tokens preprocessed across multiple databases
- *Reproducible workflows*: explicit steps and documented parameters for verification and replication
- *Multilingual and accessible*: *25 languages*, no programming or costly hardware required
- *Extensible* by design: modular architecture with potential future integration of task-specific open-weight LLMs

Try AVOBMAT with your own documents

To unlock the full potential of your documents, submit an *upload request* with your preferred configuration settings. *You don’t need any programming experience*: just follow the upload steps in the Help menu or on GitHub. Sample databases and files are available in our GitHub repository <https://github.com/avobmat/general>.

Feedback and collaboration

Help us improve AVOBMAT. Please complete our 2–3 minute feedback form <https://docs.google.com/forms/d/e/1FAIpQLSfHKSTSZ6CI4REyONZoIcp-k3_9qJb9gZhKTdT5SduzsIxsUQ/viewform>.

Interested in using AVOBMAT in a course, research project, or library/GLAM workflow? Please *contact* me to discuss a collaboration.

*Thank you* for your feedback. We look forward to hearing from you.

Róbert Péter
on behalf of the AVOBMAT Team