{"id":144,"date":"2025-09-09T21:36:07","date_gmt":"2025-09-09T19:36:07","guid":{"rendered":"https:\/\/stefanescu.lu\/?p=144"},"modified":"2025-09-10T17:42:36","modified_gmt":"2025-09-10T15:42:36","slug":"scratching-the-vibe-the-scratch","status":"publish","type":"post","link":"https:\/\/stefanescu.lu\/?p=144","title":{"rendered":"Scratching the Vibe &#8211; The Scratch"},"content":{"rendered":"\n<figure class=\"wp-block-image size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"683\" height=\"1024\" src=\"https:\/\/stefanescu.lu\/wp-content\/uploads\/2025\/09\/little-obsidize-boy-683x1024.jpeg\" alt=\"\" class=\"wp-image-145\" style=\"width:595px;height:auto\" srcset=\"https:\/\/stefanescu.lu\/wp-content\/uploads\/2025\/09\/little-obsidize-boy-683x1024.jpeg 683w, https:\/\/stefanescu.lu\/wp-content\/uploads\/2025\/09\/little-obsidize-boy-200x300.jpeg 200w, https:\/\/stefanescu.lu\/wp-content\/uploads\/2025\/09\/little-obsidize-boy-768x1152.jpeg 768w, https:\/\/stefanescu.lu\/wp-content\/uploads\/2025\/09\/little-obsidize-boy.jpeg 832w\" sizes=\"auto, (max-width: 683px) 100vw, 683px\" \/><\/figure>\n\n\n\n<p>To build <a href=\"https:\/\/github.com\/stefanesco\/obsidize\" title=\"\">obsidize<\/a> I started with a simple\u2014or at least straightforward\u2014problem: convert data stored in JSON files to Markdown.<\/p>\n\n\n\n<p>On paper, conversion between standard formats should be one of the strong points of LLMs. In practice, a direct request to Claude or ChatGPT exposed two familiar weaknesses: wordy answers and tight context limits. The exported files from Claude were simply too big for the standard web interfaces.<\/p>\n\n\n\n<p>And it wasn\u2019t going to be a one-off operation. I\u2019d keep using Claude, creating new conversations, extending old ones. I didn\u2019t want to manually manage exports each time.<\/p>\n\n\n\n<p>So my scope grew. 
What I actually needed was a tool that could:<\/p>\n\n\n\n<ol start=\"1\" class=\"wp-block-list\">\n<li>Convert JSON data to Markdown files.<\/li>\n\n\n\n<li>Update existing conversations\/projects in Markdown with new content, without overwriting edits made directly in Obsidian.<\/li>\n<\/ol>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<p>Parsers are supposed to be another strength of LLMs\u2014but the context window ruled out Claude and ChatGPT for this first stage.<\/p>\n\n\n\n<p>That\u2019s when I turned to Google\u2019s&nbsp;<a href=\"https:\/\/deepmind.google\/models\/gemini\/pro\/\">Gemini 2.5 Pro<\/a>. With its huge context window, Gemini had no problem accepting an entire conversation file and proposing a parser in Clojure\/Babashka that ran successfully on the first go. Same for the projects file. This is Gemini\u2019s real advantage: nothing other models couldn\u2019t manage, especially in agent mode with MCP, but Gemini makes it far easier when you just need a quick, working answer.<\/p>\n\n\n\n<p>Now I had two scripts doing essentially the same thing\u2014converting JSON to Markdown\u2014but no update logic.<\/p>\n\n\n\n<p>So I decided to simplify first: solve the conversion problem cleanly before tackling updates.<\/p>\n\n\n\n<p>I spun up a standard Clojure project structure (with&nbsp;<a href=\"https:\/\/github.com\/seancorfield\/deps-new\">deps-new<\/a>), ported Gemini\u2019s Babashka scripts into namespaces, and asked Claude-Code to turn it into a CLI application.<\/p>\n\n\n\n<p>We discussed the tech stack. For JSON parsing, Claude suggested (and I accepted)&nbsp;<a href=\"https:\/\/github.com\/dakrone\/cheshire\">Cheshire<\/a>, a well-established, high-performance library built on Jackson.<\/p>\n\n\n\n<p>Then came the real challenge: implementation choices.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<p>Copilots tend to overdo it. 
They generate too many options, and every option looks equally viable. The risk is saying \u201cyes\u201d too often and ending up in a swamp of half-working approaches. If you already know the right solution, you can use AI just to speed up coding\u2014but that underuses the tools. If, like me, you often&nbsp;<em>don\u2019t<\/em>&nbsp;know the right approach, you need to push back and test constantly. Otherwise you get caught in endless loops of \u201cah, I see the error \/ the issue is still there.\u201d<\/p>\n\n\n\n<p>With Gemini\u2019s proof-of-concept already working, I just needed to refactor and add a few key features:<\/p>\n\n\n\n<ol start=\"1\" class=\"wp-block-list\">\n<li>Detect input as either an archive (zip\/dms) or folder.<\/li>\n\n\n\n<li>Tag and link imported notes for Obsidian.<\/li>\n\n\n\n<li>Most importantly, detect already-imported notes and update them without overwriting edits in Obsidian.<\/li>\n<\/ol>\n\n\n\n<p>At this point, what started as \u201ctidying up my notes\u201d turned into a small personal quest. I wanted to scratch my own itch, but also to try this new way of building software with new tools. Luckily, my vacation plans were simple: kids, grandparents, clean air, fresh food, plenty of sun.<\/p>\n\n\n\n<p>After about two hours of effective work\u2014spread over three mornings\u2014I had a working Clojure project. Not packaged, but functional. It had documentation, unit tests, and just enough features to be useful.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n\n\n\n<p>One incidental advantage of working in VS Code with Gemini and Claude-Code: context switching was painless. Every morning, I could skim the conversation history, remember where we left off, and pick up seamlessly. When the kids woke up, I\u2019d close the lid and shift gears. 
The next morning, my virtual coworkers were still waiting, ready to continue as if nothing had happened.<\/p>\n\n\n\n<p>With the basics so easy to conjure with digital assistants\u2014and mornings still left in my vacation\u2014I felt confident enough to push toward a real app. The linter and automated tests were already there, but I wanted more:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Add&nbsp;<a href=\"https:\/\/github.com\/liquidz\/antq\">antq<\/a>&nbsp;to ensure up-to-date libraries.<\/li>\n\n\n\n<li>Add&nbsp;<a href=\"https:\/\/trivy.dev\/\">Trivy<\/a>&nbsp;for vulnerability and license scanning.<\/li>\n\n\n\n<li>Build native images for macOS and Linux using&nbsp;<a href=\"https:\/\/www.graalvm.org\/\">GraalVM<\/a>.<\/li>\n\n\n\n<li>Make it installable via&nbsp;<a href=\"https:\/\/brew.sh\/\">Homebrew<\/a>.<\/li>\n<\/ul>\n\n\n\n<p>That last step\u2014going from project to product\u2014turned out to be the hardest.<\/p>\n\n\n\n<p><strong>To be continued: <a href=\"https:\/\/stefanescu.lu\/?p=134\" title=\"The Vibe\">The <\/a><a href=\"https:\/\/stefanescu.lu\/?p=139\" title=\"The Vibe\">Vibe<\/a><\/strong><\/p>\n","protected":false},"excerpt":{"rendered":"<p>To build obsidize I started with a simple\u2014or at least straightforward\u2014problem: convert data stored in JSON files to Markdown. On paper, conversion between standard formats should be one of the strong points of LLMs. 
In practice, a direct request to &hellip; <a href=\"https:\/\/stefanescu.lu\/?p=144\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"om_disable_all_campaigns":false,"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":""},"categories":[1],"tags":[],"class_list":["post-144","post","type-post","status-publish","format-standard","hentry","category-uncategorized"],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/stefanescu.lu\/index.php?rest_route=\/wp\/v2\/posts\/144","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/stefanescu.lu\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/stefanescu.lu\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/stefanescu.lu\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/stefanescu.lu\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=144"}],"version-history":[{"count":5,"href":"https:\/\/stefanescu.lu\/index.php?rest_route=\/wp\/v2\/posts\/144\/revisions"}],"predecessor-version":[{"id":178,"href":"https:\/\/stefanescu.lu\/index.php?rest_route=\/wp\/v2\/posts\/144\/revisions\/178"}],"wp:attachment":[{"href":"https:\/\/stefanescu.lu\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=144"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/stefanescu.lu\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=144"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/stefanescu.lu\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=144"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}