Istanbul · Stanford · New York

Defne Genç

What I think about most as a technologist is systems that are context-aware, human-centered, and so well fitted to a person's life that they stop feeling like technology at all.

Background

I grew up in Istanbul, Turkey, and came to Stanford for undergrad, where I studied Symbolic Systems. I stayed another year for an MS in CS, spending most of it on Bloom, an LLM-augmented physical activity coaching app we built in Prof. James Landay's Interaction Design Lab. I was second author on the paper, which won Best Paper at CHI 2026 (top 1% of submissions). I also took Arabic that year because I have a goal of speaking six languages before I turn 30.

I spent a lot of time teaching too. I was a course assistant for Stanford's core HCI sequence (CS 147, CS 278, and CS 347), and it was arguably my favorite thing about being at Stanford. My students taught me just as much as my classes did. I led design studios taking student projects from early interviews to working prototypes, and I ran weekly seminar sections on HCI research. Someday I'd like to use what I know about HCI to think more seriously about education itself.

Now I'm an APM at Coinbase on the institutional derivatives team, working on perpetual futures, dated futures, and options on one of the largest regulated crypto derivatives venues in the world.

What I'm thinking about
Recommendation systems using LLMs

I’m building two systems right now that approach this from different angles. Menuto uses an LLM agent as the final reasoning layer over eight traditional scoring signals (embeddings, popularity, behavioral history) to recommend restaurant dishes, with Bayesian weight learning that adapts per user over time. Learning Et Al. uses hybrid ranking (BM25 + semantic embeddings, fused via Reciprocal Rank Fusion) to find academic papers, then a 15-call synthesis pipeline to make them readable. The interesting question across both: what’s the right balance of agentic reasoning and rule-based scoring? When should the LLM override traditional signals, and when should it defer? What makes an LLM-powered system actually good at ranking, not just good at generating text about rankings?
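The Reciprocal Rank Fusion step mentioned above is simple enough to sketch. This is a generic illustration, not Learning Et Al.'s actual code: the function name, the example document ids, and the choice of k = 60 (the constant from the original RRF formulation) are all mine.

```python
def rrf_fuse(ranked_lists, k=60):
    """Fuse several ranked lists of document ids into one ranking.

    Each document's fused score is the sum of 1 / (k + rank) over every
    list it appears in, so documents ranked highly by any single ranker
    (e.g. BM25 or an embedding model) float toward the top.
    """
    scores = {}
    for ranked in ranked_lists:
        for rank, doc_id in enumerate(ranked, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest fused score first.
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical example: fuse a lexical (BM25) ranking with a
# semantic-embedding ranking of the same paper corpus.
bm25_ranking = ["paper_a", "paper_b", "paper_c"]
embedding_ranking = ["paper_c", "paper_a", "paper_d"]
fused = rrf_fuse([bm25_ranking, embedding_ranking])
```

Because RRF only consumes ranks, not raw scores, it sidesteps the problem of calibrating BM25 scores against cosine similarities before combining them.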

Interfaces that surface strengths

Current behavior change systems have two problems: rigid interfaces that fail diverse populations, and negative feedback loops that undermine the outcomes they’re designed to support. With Bloom, we found that the LLM coach’s primary value was psychological, not behavioral: surfacing behaviors people already do so they realize they’re doing more than they’ve given themselves credit for. I want to build adaptive systems that learn from ambient patterns and surface positive behaviors, through contextual interventions, editable ambient widgets, and tangible objects that carry personal meaning.

Can AI have taste, or is it all slop?

Technology that learns from human output tends, by design, to regress toward the mean. The most represented wins. But taste is the opposite: it’s about having strong preferences, specific references, a point of view. I want to think about how to give AI genuine spikes, the way humans have them. A lot of taste is built in the physical world, through things you touch, spaces you move through, what people wear on the street. I see potential in ubiquitous computing to ground AI taste in real-world experience, rather than just what has been written about it.

Otherwise
Origin
Istanbul, Turkey
Based
New York City
Education
Stanford MS CS (HCI) · BS SymSys
Current role
APM @ Coinbase
Teaching
CS 147 · CS 278 · CS 347
Languages
Turkish (native), English (fluent), French (conversational), Arabic (elementary), Spanish (elementary)
Research
Landay Lab (Computer Science) · Kuo Lab (Stanford Medicine)
Email
defneg@stanford.edu
LinkedIn
linkedin.com/in/-defne
GitHub
github.com/defnegenc
Résumé
View résumé