4 votes

What programming/technical projects have you been working on?

This is a recurring post to discuss programming or other technical projects that we've been working on. Tell us about one of your recent projects, either at work or personal projects. What's interesting about it? Are you having trouble with anything?

2 comments

  1. Eji1700

    Per my last topic here, I'm pounding away at getting Cosmos Cloud set up locally so that I can eventually open it up either directly or with a VPN.

    I'm at the stage where I need to figure out what the actual architecture should be, and need to look into that. Ideally I'd like it to ALWAYS work locally, because that's kinda the whole point: even if the internet is down, I should be able to access whatever I want on it.

    From there, it would be nice to be able to access it remotely when the internet is working. I know I can VPN in, but it's a question of whether the few other people I want to give access to can manage that, and if not, what a more reasonable workaround would be.

    2 votes
  2. tauon
    (edited)

    (I had already done what I’m writing about here before last week’s recurring post went up, but didn’t have the energy at the time to write about it.)

    One of my favorite command line tools that sees near-daily use is also pretty much the only one I’ve written myself: trl. You can translate stuff with it. It can be piped into (for example from within editors!!) and otherwise follows shell mannerisms you’d expect to see. About two years ago, I wrote it so I wouldn’t have to go and open up a browser plus website or desktop app/widget every time I wanted to briefly get a translation for a single word. (And because, by chance, I had found out about the DeepL API’s generous 500k characters per month free tier limit.)
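    To give an idea of the piping behaviour, here’s a minimal sketch in Python (purely illustrative – trl isn’t necessarily written in Python, and none of these names are real): take the arguments if any were given, otherwise read from stdin so that `cat notes.txt | trl`-style pipelines work.

    ```python
    # Illustrative sketch of a pipeable CLI: take text from the arguments if
    # any were given, otherwise read it from stdin (so piping from an editor
    # or another command works). The translate step is stubbed out.
    import sys


    def main() -> None:
        args = sys.argv[1:]
        text = " ".join(args) if args else sys.stdin.read()
        if not text.strip():
            sys.exit("nothing to translate")
        print(text)  # a real tool would call the configured translation backend here


    if __name__ == "__main__":
        main()
    ```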

    Up until last week, it was also not much more than a dead simple wrapper around that API. I’ve now rewritten it so I’ll be able to fairly easily add more translation services to it, in anticipation of a future public Kagi Translate API. As a proof of concept (and sanity check) for the feature, I’ve added basic support for locally self-hosted LibreTranslate, which did work nicely.
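    For anyone curious what such a pluggable setup could look like, here’s a minimal Python sketch (not trl’s actual code; the LibreTranslate request/response fields are from memory, so double-check them against its docs, and the base URL is just a common default):

    ```python
    # Sketch of a provider interface plus a LibreTranslate backend, standard
    # library only. A DeepL (or Kagi, or LLM) backend would simply be another
    # class exposing the same translate() method, chosen by config at startup.
    import json
    import urllib.request
    from typing import Protocol


    class Translator(Protocol):
        def translate(self, text: str, target: str, source: str = "auto") -> str: ...


    class LibreTranslate:
        def __init__(self, base_url: str = "http://localhost:5000", api_key: str | None = None):
            self.base_url = base_url.rstrip("/")
            self.api_key = api_key

        def translate(self, text: str, target: str, source: str = "auto") -> str:
            payload = {"q": text, "source": source, "target": target, "format": "text"}
            if self.api_key:
                payload["api_key"] = self.api_key
            req = urllib.request.Request(
                f"{self.base_url}/translate",
                data=json.dumps(payload).encode(),
                headers={"Content-Type": "application/json"},
            )
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)["translatedText"]
    ```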

    But what I’m really looking forward to is just piping a paragraph or even entire pages/the whole document into it (from within my beloved Helix editor, of course) for a future “proofread API” – that is, the same language in as out, possibly with annotations too. Earlier this year, I wrote my first longer paper/thesis in my second language (English) and already used trl there to self-check vocabulary questions by going EN (my phrasing) → first language → EN (machine translation), which, while nice because I could do it with the surrounding context in sight, still felt a bit clunky.

    I furthermore had an idea I’m not sure about yet: I could also use a classic LLM CLI tool/API wrapper like aichat with a corresponding system prompt/“role” to simulate yet another translation service, which could trivially be expanded to include the proofreading mode. So far I haven’t done that, mostly because it feels like a cop-out and a bit removed from the original intention of having a small, translation-focused tool with few to no dependencies, and because in the rewrite I set it up to always expect a (remote or local) base URL. I think aichat does come with the ability to spin up its own web server locally, but it’d be neater if I could avoid needing one. :P
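    As a sketch of that idea (everything below is hypothetical: the base URL and model name are placeholders, and it only assumes some OpenAI-compatible chat endpoint being reachable locally), an LLM-backed provider would just be another class with the same translate() shape, and a proofreading mode would only be a different system prompt away:

    ```python
    # Hypothetical LLM-backed provider speaking the OpenAI-style
    # /chat/completions protocol; base URL and model name are placeholders.
    import json
    import urllib.request


    class LLMTranslate:
        def __init__(self, base_url: str = "http://localhost:8000/v1", model: str = "default"):
            self.base_url = base_url.rstrip("/")
            self.model = model

        def translate(self, text: str, target: str, source: str = "auto") -> str:
            # source is ignored here; the model detects the input language itself.
            system = (
                f"You are a translation engine. Translate the user's text into {target}. "
                "Output only the translation, with no commentary."
            )
            payload = {
                "model": self.model,
                "messages": [
                    {"role": "system", "content": system},
                    {"role": "user", "content": text},
                ],
            }
            req = urllib.request.Request(
                f"{self.base_url}/chat/completions",
                data=json.dumps(payload).encode(),
                headers={"Content-Type": "application/json"},
            )
            with urllib.request.urlopen(req) as resp:
                return json.load(resp)["choices"][0]["message"]["content"]
    ```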

    Another use case this rewrite has enabled (theoretically) is easy comparison of different translation providers. In practice, you’d have to write a short automation wrapper around the script to make it not suck for more complex comparisons. Maybe with another LLM to grade the different providers’ results?
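    Something like this tiny helper would be enough for a first pass at side-by-side comparison (again just a sketch; `providers` maps a label to anything exposing the translate() method from the sketches above):

    ```python
    # Sketch: run the same text through several providers and print the
    # results side by side; a failing provider shouldn't abort the whole run.
    def compare(text: str, target: str, providers: dict) -> None:
        for name, provider in providers.items():
            try:
                result = provider.translate(text, target)
            except Exception as exc:
                result = f"<error: {exc}>"
            print(f"{name:>12}: {result}")


    # e.g. compare("Guten Morgen", "en", {"libre": LibreTranslate(), "llm": LLMTranslate()})
    ```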

    1 vote