About Me

I am a Ph.D. candidate in Computer Science at Harvard, specializing in programming languages, formal methods, and type systems. My research focuses on building programming and verification tools that can keep pace with ever-evolving software.

Some of the things I’ve worked on include: designing language features for modular and extensible programming, blending formal methods with LLMs to synthesize verified code, building tools that check whether programs in different languages actually do the same thing, and finding ways to make proof developments more maintainable. What unifies my work is the belief that safety and correctness guarantees should be built into the languages and tools that we use to create software.

I’m lucky to be advised by Prof. Nada Amin in the Metareflection Lab, and I’ve also spent two summers as an intern with the Automated Reasoning Group at AWS. These experiences have shaped my passion for collaborative, big-picture projects that push boundaries while keeping a sense of optimism.

Outside of work, you can find me on a scenic hiking trail (snacks required), wandering around small towns with my husband, or at home hanging out with my cats.

Research Experience

Applied Scientist Intern
Amazon Web Services (AWS)
Summers 2023 and 2024
During my first internship at AWS, I worked with the Dafny team on making proofs and code more maintainable over the long term, with a focus on module refinement and backward compatibility. I enjoyed coming back for a second internship, where I built new features for a cross-language equivalence checker, automating the generation of comparison predicates for complex heap-based data types across different programming languages.
Doctoral Researcher
Harvard University
2020 - Present
During my Ph.D., I developed Persimmon, a family-polymorphic calculus that supports extensible variant types, pattern matching, and modular code reuse through nested family polymorphism (OOPSLA 2024). I am currently carrying the insights from Persimmon into LLM-driven tools that synthesize verified code, exploring how AI can integrate with formal methods to guarantee correctness (LMPL 2025). My research has been supported by an NSF Award and an Amazon Research Award, and I've had the privilege of presenting my work as an invited speaker at OOPSLA 2024, NEPLS 2025, and TYPES 2021.
Graduate Researcher
Princeton University
2018 - 2020
During my Master’s, I focused on making data structures both reusable and verifiably correct. I designed a general API for ordered database indexes and used it to formally verify a B+-tree with cursors, ensuring safety and correctness through mechanized proofs in Rocq. I then packaged this work as a modular, formally verified C component that can be cleanly reused in larger systems. This work was recognized with an invitation to present at CoqPL 2021, where I shared it as a model for building verified, reusable infrastructure components.

Publications

Anastasiya Kravchuk-Kirilyuk, Fernanda Graciolli, Nada Amin
Short paper to appear in LMPL 2025

Anastasiya Kravchuk-Kirilyuk, Gary Feng, Jonas Iskander, Yizhou Zhang, Nada Amin
OOPSLA 2024

Anastasiya Kravchuk-Kirilyuk, Antoine Voizard, Stephanie Weirich

Education

Harvard University
Ph.D. in Computer Science, expected 2026
Princeton University
M.S.E. in Computer Science, 2020
University of Pennsylvania
B.S.E. in Computer Science, 2017

Skills

Programming Languages

Scala • Java • Python • C# • OCaml • R

Software Engineering

Language Design • Abstraction Design • Modularity • Extensibility • Functional Programming • LLM-Assisted Program Synthesis

Formal Verification

Rocq • Dafny • Boogie • Z3 • Verified Software Toolchain (VST) • Automated Reasoning • Proof Evolution • Proof Synthesis

Theses

Master's Thesis
Princeton University, 2020