I have felt this a lot while designing the landing page for my SQL canvas side project. _I_ really want to write about DuckDB WASM, pre-signed URLs, and how cool Cloudflare's Durable Objects are.
But my target audience is data analysts, and they just want to analyze some data!
I have gone through a lot of design revisions because I have a hard time containing my technical excitement. I was surprised by how hard it is to communicate a product clearly.
As a backend/data person I was on my high horse, thinking that designers' jobs are so much easier than distributed systems work. Now I feel the opposite!
Maybe that's why I am not in your target audience, but I love how the design looks. I have bookmarked it, too.
You show so many features, the way they are presented is nice, and it is mobile friendly too. I am also a fan of neobrutalism. :)
I remember p2hari commenting on one of my "What are you working on" comments, so maybe they got it from there. Anyway, here's the link: https://kavla.dev/
I totally agree on investing in a sane data model upfront. So many production systems have schemas that only made sense to the engineer who created them. I would be delighted if I could read a schema and understand what a column means without having to dig through a bunch of migration PRs.
I recently encountered `is_as BOOL` in an important table. After way too much time invested, we found out it meant "is active service". </DDL rant>
I integrate with many ERPs and this is the bane of my existence.
One of the worst has field names like `ft_0001...N` and table names like `UNCC_00001...N`, everything in `text` fields (even numbers!), zero foreign keys, almost no indexes, and what are views?
Another has a funny field that is a blob needing decoding with a specific FreePascal version. The field? The price of the product.
Another has, in the same column, a mix of "," and "." as decimal separators, and I need to check the number of decimal places to deduce which one I'm looking at.
FUN.
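That separator-deducing trick fits in a few lines. A hypothetical Python sketch of the heuristic (my guess at the logic, not the ERP's actual rules): if both separators appear, the rightmost one is the decimal separator; if only one appears, a repeated separator or exactly three trailing digits suggests a thousands separator.

```python
def parse_amount(raw: str) -> float:
    """Parse a number whose decimal separator may be ',' or '.'."""
    s = raw.strip()
    last_comma, last_dot = s.rfind(","), s.rfind(".")
    if last_comma != -1 and last_dot != -1:
        # Both present: the rightmost one is the decimal separator.
        dec = "," if last_comma > last_dot else "."
    elif last_comma != -1 or last_dot != -1:
        sep = "," if last_comma != -1 else "."
        trailing = len(s) - s.rfind(sep) - 1
        # Repeated separator, or exactly 3 trailing digits:
        # treat it as a thousands separator (no decimal part).
        dec = None if s.count(sep) > 1 or trailing == 3 else sep
    else:
        dec = None
    cleaned = "".join(
        "." if c == dec else c for c in s if c.isdigit() or c == "-" or c == dec
    )
    return float(cleaned)
```

Ambiguous inputs like "1,234" still require a judgment call (here: thousands separator), which is exactly why this kind of column is so much fun.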
P.S.: I normalize all these ERPs into my own schema and have gotten praise for things like my product table being called products.
I may have worked with that one. Did it have a parallel schema that mapped tables and fields with legible/customisable names, so every SQL call had to join the mapping tables to hit the required table and fetch the fields you were after?
Once upon a time I wrote a Windows .NET program to convert data from other financial CRM systems into the system I worked on. I built a data mapping tool, as no customer we onboarded placed "custom" data in the same tables or fields, even when using the same financial system.
I actually miss doing that kind of work, my brain seems to be wired to find it fun. Writing SQL is one thing I don't delegate to an AI or even an ORM like Doctrine.
I think the best db schema I had the displeasure of working with was one where it was a requirement that every table and column name NOT have vowels, except for the few that could, and "the few that could" were governed entirely by a spreadsheet owned by the DB admin.
And so you got tables like LANDMRK and columns like RCR_RCRDR.
I work with an Oracle database like this. In the old days, there was a 30 character limit on column names, so you end up with conventions like no vowels. The limit no longer exists today, but the DBA continues to enforce the limit on new columns.
I never got an answer when I asked. This same government agency also got extremely mad when our dev manager upgraded the ASP.NET version for one project because it had some really useful features we were developing with. They deleted his permissions to deploy to production from there until the end of time, requiring us to email someone each time we wanted to update the application. It was great.
I recently switched from Shotcut to Kdenlive. Kdenlive's UX is much more intuitive. Lots of features, I still feel like a beginner, which is such a fun feeling!
I'm using it together with OBS to post short demo videos of my side project. I could use Loom I guess, but I prefer to keep my tech stack FOSS when I can.
Creating "non-standard" video resolutions is a bit of a pain, though. But I've solved that with an ffmpeg one-liner.
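The exact one-liner isn't shown above, but a common way to hit a non-standard resolution with ffmpeg is to scale to fit and then pad to the exact target. A small Python sketch that builds such a command (the filenames and the 1080x1350 target below are made-up examples, not my actual settings):

```python
def ffmpeg_resize_cmd(src: str, dst: str, width: int, height: int) -> list[str]:
    # Scale to fit inside the target while keeping the aspect ratio,
    # then pad with centered borders to the exact target resolution.
    vf = (
        f"scale={width}:{height}:force_original_aspect_ratio=decrease,"
        f"pad={width}:{height}:(ow-iw)/2:(oh-ih)/2"
    )
    return ["ffmpeg", "-i", src, "-vf", vf, "-c:a", "copy", dst]

# e.g. a 4:5 portrait target for a demo clip:
cmd = ffmpeg_resize_cmd("in.mp4", "out.mp4", 1080, 1350)
```

Passing the args as a list to `subprocess.run` avoids shell-quoting headaches with the filter string.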
But that's a single use case. I can also put a Postgres DB in front of Iceberg, add some views in Trino to consume the data, and get faster streaming ingestion. I know this sounds like that "Dropbox is just rsync+cron" post, but as soon as you replace pure insertions with upserts the gap should vanish.
I've worked with data my entire career. We need to alt-tab so much. What if we put it all on a canvas? That's what I'm building with Kavla!
Right now I'm working on a CLI that connects a user's local machine to a canvas via WebSockets. It's open source here: https://github.com/aleda145/kavla-cli
Next, I want to do more with agents. I have a feeling that the canvas is an awesome interface for watching agents work.
That's right! Notebooks are great when you have a specific goal in mind. I think a canvas is nicer when you are doing exploratory data analysis so you can branch out.
Maybe I should actually call out notebooks specifically on the landing page! Thanks
Great stack! I'm taking a similar approach for my latest project (kavla.dev), but using fly.io and their suspend feature.
Scaling to zero, with database persistence via Litestream, has cut my bill down to $0.10 per month for my backend plus database.
Granted I still don't have that many users, and they get 200ms of extra latency if the backend needs to wake up. But it's nice to never have to worry about accidental costs!
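For anyone curious, the Litestream side of that setup is just a small config that continuously replicates the SQLite file to object storage, so a scaled-to-zero machine can restore state on wake. A minimal sketch (the database path and bucket name are placeholders, not my actual config):

```yaml
# litestream.yml — minimal replication sketch
dbs:
  - path: /data/app.db          # the SQLite file your backend writes to
    replicas:
      - url: s3://my-bucket/app-db   # any S3-compatible bucket works
```

On boot you run `litestream restore` before starting the app, then run the app under `litestream replicate`.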
I'm completely sold on the canvas layer. Embracing non-linearity is such a boon when you're in the ideas stage. Once you have verified an idea, though, moving it to another medium (a document, a presentation, or just code) is often the best choice.
Do you see the canvases created with Spine as one-offs that you discard once you have your deliverable, or as something living that you keep around?
I'm building a side project for running SQL on a canvas (kavla.dev), so I'm thinking about canvas workflows all the time!
Thanks! Great question. We see canvases as living workspaces you can revisit, iterate on, and build on over time.
But the deliverables (docs, slides, code) are first-class outputs you can export and use independently. So it works both ways depending on the workflow.
Kavla looks cool, canvas-based SQL is a great use case for this kind of thinking!
I have worked with data for a while, and I feel our tools could be much better when it comes to "flow". I want an experience where you don't need to alt-tab to Slack, images, or another query. What if we put it all on a canvas? That's what Kavla is all about!
Since last month I've made a lot of improvements to the editor to make the "flow" better.
I've also read up on HMACs, nonces, and other fun cryptography to create read-only boards.
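The read-only links boil down to signing a board id plus a nonce with a server-side key and verifying the signature on each request. A minimal Python sketch of that scheme (the token format and key handling here are illustrative, not Kavla's actual implementation):

```python
import hashlib
import hmac
import secrets

SECRET_KEY = b"server-side-secret"  # hypothetical; load from env in practice

def make_readonly_token(board_id: str) -> str:
    # A fresh nonce makes every share link unique; storing it server-side
    # lets you revoke a link later by deleting the nonce.
    nonce = secrets.token_hex(8)
    msg = f"{board_id}:{nonce}".encode()
    sig = hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()
    return f"{board_id}:{nonce}:{sig}"

def verify_readonly_token(token: str) -> bool:
    try:
        board_id, nonce, sig = token.split(":")
    except ValueError:
        return False
    msg = f"{board_id}:{nonce}".encode()
    expected = hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids a timing side channel.
    return hmac.compare_digest(sig, expected)
```

Because the key never leaves the server, anyone holding the link can read the board but can't forge tokens for other boards.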
But my target audience is data analysts, and they just want to analyze some data!
I have gone through a lot of design revisions because I have a hard time containing my technical excitement. I was surprised by how hard it is to communicate a product clearly.
As a backend/data person I was on my high horse, thinking that designers' jobs are so much easier than distributed systems work. Now I feel the opposite!