Short version: we fetch public GitHub activity, quantify a handful of behavioral signals, and hand them to a language model with tight tone rules. The model produces a score, a title, three strengths, three red flags, and a one-sentence forecast. That’s it. No magic.

This is a satirical diagnostic, not a dating app and not a hiring tool. It exists because a developer’s GitHub often says more about their habits than any bio. Sometimes flatteringly. Often not.

What we look at

Only public data from the GitHub REST API. We never see private repos, DMs, or anything you haven’t already shown the world.

Signals we extract

  • Cadence — commits per window, current streak, longest streak.
  • Night-owl ratio — fraction of commits made between 00:00 and 05:00 UTC.
  • Force-push count — how often history gets rewritten.
  • Review behavior — PR review comments given, tone, responsiveness to @-mentions.
  • Commit-message hygiene — average length, ratio of one-word commits.
  • Collaboration footprint — organizations, unique collaborators, issues answered.
  • Star and repo portfolio — total stars, primary language, account age.
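Most of these reduce to a few lines of arithmetic over the events feed. A sketch of two of them, night-owl ratio and commit-message hygiene, on made-up sample data:

```python
from datetime import datetime, timezone

def night_owl_ratio(timestamps: list[datetime]) -> float:
    """Fraction of commits landing between 00:00 and 05:00 UTC."""
    if not timestamps:
        return 0.0
    late = sum(1 for t in timestamps if 0 <= t.hour < 5)
    return late / len(timestamps)

def one_word_ratio(messages: list[str]) -> float:
    """Share of commit messages that are a single word ('fix', 'wip', ...)."""
    if not messages:
        return 0.0
    return sum(1 for m in messages if len(m.split()) == 1) / len(messages)

# Hypothetical commits: (UTC timestamp, message)
commits = [
    (datetime(2024, 5, 4, 2, 13, tzinfo=timezone.utc), "wip"),
    (datetime(2024, 5, 4, 14, 2, tzinfo=timezone.utc), "Add retry logic to fetcher"),
    (datetime(2024, 5, 5, 3, 41, tzinfo=timezone.utc), "fix"),
    (datetime(2024, 5, 6, 11, 0, tzinfo=timezone.utc), "Refactor signal extraction"),
]
times, msgs = zip(*commits)
print(night_owl_ratio(list(times)))  # 0.5
print(one_word_ratio(list(msgs)))    # 0.5
```

Streaks, force-push counts, and collaboration footprint are the same idea: count, divide, timestamp-diff.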

How the score is built

The raw signals become a structured prompt. The language model weighs them into five sub-scores (Work Ethic, Emotional Stability, Care for Others, Aesthetic & Order, Social Life) and one headline number. We don’t hand-tune weights; the model interprets the signals as a reader would.
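"Structured prompt" means the numbers are serialized and handed over with the output contract spelled out. A hedged sketch of the shape (the exact wording and field names in the real prompt will differ):

```python
import json

def build_prompt(handle: str, signals: dict) -> str:
    """Pack numeric signals into a prompt; the model does the weighing."""
    return (
        "You are a dry, satirical profile analyst. Using ONLY the signals "
        "below, return JSON with: score (0-100), title, five sub-scores "
        "(work_ethic, emotional_stability, care_for_others, aesthetic_order, "
        "social_life), three strengths, three red_flags, and a one-sentence "
        "forecast. Quote concrete numbers from the signals as receipts.\n\n"
        f"Subject: {handle}\nSignals:\n{json.dumps(signals, indent=2)}"
    )

# Illustrative signal values, not real output
signals = {
    "commits_last_30d": 77,
    "night_owl_ratio": 0.42,
    "force_pushes": 9,
    "one_word_commit_ratio": 0.31,
    "days_since_last_review": 17,
}
prompt = build_prompt("octocat", signals)
```

Because the prompt contains only extracted numbers, every "receipt" the model quotes can be traced back to a field in this payload.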

The receipts shown in each section (“77 commits on Sunday last month”, “last review 17 days ago”) are grounded in the actual signal extraction. If a number sounds suspiciously specific, it probably is — a quoted datum pulled straight from your events feed.

What we won’t do

  • Mock your identity, appearance, origin, or pronouns.
  • Share your analysis unless you explicitly click publish. (That flow isn’t live yet; when it is, it will be explicit.)
  • Scrape private repos, DMs, or anything not already public on github.com.
  • Store your results on our servers in this version. Analyses live in a short-lived cache and evaporate.

Model

Analyses are produced by a third-party language model. The provider and model are configured per deployment. The code is provider-agnostic — see the source. Today we run llama-3.3-70b-versatile via Groq.
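Provider-agnostic here means the analysis code depends on an interface, not on Groq. A minimal sketch of that seam, with hypothetical names (the real source defines its own):

```python
import json
from typing import Protocol

class ChatProvider(Protocol):
    """Anything that turns a prompt into text."""
    def complete(self, prompt: str) -> str: ...

class CannedProvider:
    """Stand-in for tests. A real deployment would instead wrap the
    Groq (or any OpenAI-compatible) chat-completions endpoint."""
    def complete(self, prompt: str) -> str:
        return '{"score": 55, "title": "Probably Fine"}'

def analyze(provider: ChatProvider, prompt: str) -> dict:
    # The caller never knows which model produced the text.
    return json.loads(provider.complete(prompt))

result = analyze(CannedProvider(), "signals go here")
print(result["score"])  # 55
```

Swapping models is then a deployment-config change, not a code change.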

If you’re a subject

Anyone can run /u/<your-handle> against any public GitHub username, including their own. Nothing is indexed or published. If a version of a public profile does appear on our leaderboard in the future, removal will be self-service.

If the analysis feels off, it probably is wrong. A language model reading 90 public events from a 10-year career has limited information. Take it in the spirit of a tabloid horoscope.

Found a bug, a rude output, or a false claim? Open an issue at github.com/dastanko/githusb.