On What Photos Remember

Every photo taken on a smartphone carries more than the image itself. There’s the visible—faces, places, moments frozen in pixels. And there’s the invisible—metadata whispered into the file, signals that machines can extract, patterns that emerge only when you look across many images at once.

I’ve been building tools to surface these signals. Not to judge them, but to see them. The question that keeps returning: what can this information actually teach us about ourselves?

What a single photo contains

The raw data is surprisingly rich:

Metadata from the device

  • When the photo was taken (date, time, sometimes down to the second)
  • Where it was taken (GPS coordinates, if location services were on)
  • What device captured it (camera make, model, lens characteristics)
  • Technical settings (orientation, sometimes exposure data)
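One concrete piece of this: EXIF stores GPS coordinates not as decimal degrees but as degree/minute/second rationals plus a hemisphere letter. Reading the raw tags usually goes through a library such as Pillow or exifread; the conversion itself is simple arithmetic. A minimal sketch (the function name and example values are illustrative):

```python
from fractions import Fraction

def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF-style degrees/minutes/seconds plus a hemisphere
    reference ('N'/'S'/'E'/'W') into signed decimal degrees."""
    value = float(degrees) + float(minutes) / 60 + float(seconds) / 3600
    # South and West are negative in decimal-degree convention
    return -value if ref in ("S", "W") else value

# Example: 48 deg 51' 29.6" N, the kind of rationals EXIF actually stores
lat = dms_to_decimal(Fraction(48), Fraction(51), Fraction(296, 10), "N")
```

The same conversion applies to longitude; the only difference is the reference letter.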

Signals extractable from the image

  • Faces present: how many, where positioned, relative sizes
  • Facial geometry: eye openness, mouth shape, head angle, gaze direction
  • Body poses: posture, arm positions, spatial relationships between people
  • Scene context: indoor or outdoor, time of day (from lighting), general environment type
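Whatever detector produces the face bounding boxes, the geometric signals above fall out of simple ratios. A sketch of that derivation, assuming boxes in (x, y, width, height) pixel form (the function name and the box format are assumptions, not any particular library's API):

```python
def face_signals(boxes, image_size):
    """Derive count, relative size, and normalized position from
    hypothetical face bounding boxes (x, y, w, h) in pixels."""
    img_w, img_h = image_size
    faces = []
    for (x, y, w, h) in boxes:
        faces.append({
            # fraction of the frame the face occupies: a proxy for closeness
            "rel_area": (w * h) / (img_w * img_h),
            # face center as a fraction of frame width and height
            "center": ((x + w / 2) / img_w, (y + h / 2) / img_h),
        })
    return {"count": len(boxes), "faces": faces}

# Two faces in a 4000x3000 frame
info = face_signals([(1000, 800, 400, 400), (2400, 900, 300, 300)],
                    (4000, 3000))
```

Relative size and position are what make comparisons across photos possible; absolute pixel counts vary with resolution, ratios don't.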

What emerges from patterns

  • Who appears frequently in your photos
  • Where you tend to be, and when
  • How your expressions cluster over time
  • The rhythm of your photography itself—bursts and silences
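The bursts-and-silences rhythm can be made concrete by grouping timestamps: photos closer together than some gap belong to one burst, and the stretches between bursts are the silences. A minimal sketch; the one-hour threshold is an illustrative assumption, not a standard:

```python
from datetime import datetime, timedelta

def find_bursts(timestamps, gap=timedelta(hours=1)):
    """Group photo timestamps into bursts: runs of photos where each
    is taken less than `gap` after the previous one."""
    if not timestamps:
        return []
    ts = sorted(timestamps)
    bursts = [[ts[0]]]
    for t in ts[1:]:
        if t - bursts[-1][-1] < gap:
            bursts[-1].append(t)   # continue the current burst
        else:
            bursts.append([t])     # a silence ended; start a new burst
    return bursts

stamps = [datetime(2024, 5, 1, 9, 0),
          datetime(2024, 5, 1, 9, 12),
          datetime(2024, 5, 1, 18, 30)]
bursts = find_bursts(stamps)  # a morning pair, then an evening single
```

Varying the gap changes the granularity: minutes reveal individual shooting sessions, days reveal trips and events.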

The limits of extraction

None of this tells us what matters most. A smile detected at 73% intensity doesn’t mean happiness. A photo taken at 2am doesn’t mean insomnia; it might mean a celebration, feeding a newborn, or a red-eye flight. GPS coordinates in Paris don’t capture whether you were lonely or in love.

The signals are geometric, temporal, statistical. They describe surfaces, not depths.

And yet.

Where self-knowledge might live

The value isn’t in any single measurement. It’s in the aggregate, viewed with honest eyes.

If I look at a year of photos and notice:

  • I rarely appear in my own photos
  • The faces that appear most often are colleagues, not friends
  • Most photos are taken in transit, few at rest
  • My expressions in group photos differ from solo selfies

These patterns don’t tell me what to feel. They show me what I’ve been doing. The interpretation—the meaning—that’s mine to make.

There’s something Taoist in this approach. The data doesn’t push; it reflects. Like water showing you your own face, it offers no opinion. You bring the questions; the patterns suggest where to look for answers.

A tool for reflection, not judgment

The danger is obvious: these tools can be used to surveil, to categorize, to reduce people to profiles. I don’t want to build that.

What I’m trying to build is a mirror. One that shows you patterns in your own life, patterns you might not notice because you’re too close to see them. Not to tell you what’s wrong, but to prompt the question: is this what I intended?

The photo library on your phone is an accidental diary. Years of moments, accumulating. Most of us never read it. We scroll past, looking for a specific memory, never asking what the collection as a whole might reveal.

Questions worth asking

If you could see the aggregate truth of your photos:

  • Who do you photograph most? Is that who you want to remember?
  • Where do your photos cluster? Is that where your life is, or where you wish it was?
  • When do you take photos? What prompts you to capture a moment?
  • How do your expressions change across contexts? With whom do you seem most at ease?

These aren’t questions a machine can answer. But a machine can surface the data that makes the questions possible.

The practice

I don’t think this kind of reflection should be automated. No app should tell you “you seem happier on weekends” or “you don’t photograph your family enough.” That crosses from observation into judgment.

Instead, the practice might be simpler: periodically look at what the signals say. Sit with the patterns. Notice what surprises you. Ask yourself why.

The photo knows when and where. The face detection knows expressions and positions. Only you know what it meant.


This thinking led to the image analysis tool on this site—an experiment in surfacing signals without interpreting them. Upload a photo, see what’s extractable. The meaning remains yours to find.