metalmoon's recent activity

  1. Comment on Not all porn is created equal - is there such a thing as a healthy pornography? in ~life

    metalmoon
    Link Parent

    Let me preface this by saying I do not think I'm an attractive male. I'm probably a 6/10 at best. However, quite literally every relationship I've been in has been -- at least at the very early stages -- initiated by the woman. Although I guess it depends on how you define "initiated". I'm defining it this way: sending clear signals of interest like touching or suggesting 1:1 activities (although they've never been described as "dates", per se). Then afterwards I generally "took the lead" in progressing things. I'm 40 and have been in 5 long-term relationships with women going back to high school in the late 90s that all started this way. So I have lived a very different experience from what's stated in OP's thesis and what you agree with.

    8 votes
  2. Comment on Where do you stand on climate change? in ~talk

    metalmoon
    Link Parent

    I know of scientists, or people who were on track to become scientists, who are disillusioned with the funding and grant process of most modern-day research. One of those same people uses that cynicism to doubt the veracity of mainstream climate science. So it's not just a matter of ignorance at play here.

    7 votes
  3. Comment on What are you reading these days? in ~books

    metalmoon
    Link

    I'm reading book one of The Stormlight Archive, The Way of Kings by Brandon Sanderson. I'm only about 20% of the way through the book, but I really enjoy the worldbuilding and character development so far.

    6 votes
  4. Comment on I am an extremely light sleeper, and need advice in ~life

    metalmoon
    Link Parent

    Your description of your relationship to caffeine, your history with it, and the effect it has in even small quantities now that you’ve largely eliminated it could have been written by me. I actually still really miss it even after several months of not using it regularly. But there are so many benefits to not using it regularly that I am usually able to talk myself out of a craving. I’ve had a couple cups of oolong in the morning since quitting, but both times I had objectively lower-quality sleep according to my Fitbit, which I use to track my sleep each night. Those two cups were fantastic after a long period of abstinence though. I felt sharp, my mood improved, I was more social, and I just generally felt great for the rest of the day. But then I paid the price at night. It really is such a fascinating drug that most people take for granted.

    3 votes
  5. Comment on I am an extremely light sleeper, and need advice in ~life

    metalmoon
    Link

    I see a lot of suggestions for white noise through an app, but OP, please try an actual high-quality noise machine. Dohm is a great brand that generates white noise by actually pushing air through the machine. Cheap white noise machines just use a speaker, which results in a much lower-quality sound that doesn’t muddle outside noise as effectively. I also agree with others about a sleeping mask; it can make a big difference in the quality of sleep for light sleepers.

    My last suggestion may be controversial. If you consume caffeine, try quitting it entirely. Some people (myself included) seem to be affected by caffeine for longer periods of time. You’ll often hear people suggest stopping caffeine consumption 8-12 hours before bedtime. I’ve found that rule of thumb doesn’t apply to me, and any significant amount of caffeine (for me, even one cup of tea) disrupts my ability to fall and stay asleep. After quitting, I’ve found I generally fall asleep more easily and, more importantly, can fall back to sleep after waking up in the night or early morning. I encourage anyone with sleep issues to try quitting caffeine entirely just to see if it helps. It’s not advice you generally see anywhere, but it has been a game changer for me!

    17 votes
  6. Comment on Google's epic multi-billion dollar ad scam makes sense to us in ~tech

    metalmoon
    Link

    I wouldn't be surprised by this news. I've worked in digital marketing for 15 years and have been a victim of Google Search ad fraud on their Google Search Partners network, which is the same thing being described in this exposé, only for YouTube. LinkedIn and Facebook also have ad partner sites that are rife with fraud. Advertisers are shockingly unaware of much of this fraud in my experience, and all the platforms have recently stopped reimbursing for click fraud even if you have third-party tools that prove you've been defrauded. Programmatic ads are just as bad, if not worse in some cases. The best thing advertisers can do is exclude all these search partners and questionable publisher sites where possible, which raises the cost per impression or click, but greatly reduces the exposure to fraud.

    3 votes
  7. Comment on Reddit is Fun, Apollo, BaconReader, and other third-party Reddit apps have officially shut down in ~tech

    metalmoon
    Link Parent

    Sync for Lemmy is in development. I haven't dug into Lemmy yet, but I'll be setting it up once Sync is ready for it.

    5 votes
  8. Comment on Tips for finding a good landlord? in ~life

    metalmoon
    Link

    I found ours on Zillow and he's been great so far. He owns a few properties, works a 9-5 job, and is just a great, down-to-earth guy. We haven't been here long enough to see if or how much rent increases, but I'm willing to bet he'll be fair. So to answer your question: shop around, ask questions about their rental properties, and stick with a small-time landlord. That's been my experience, anyway.

    1 vote
  9. Comment on Piers Morgan Noam Chomsky interview - June 2023 in ~humanities

    metalmoon
    Link

    This is a surprisingly great interview of Noam Chomsky, who is now 94 years old, by Piers Morgan. Piers lets Noam speak relatively uninterrupted, and they cover a range of contemporary and historical topics in this ~35-minute interview.

    3 votes
  10. Comment on Let's talk Local LLMs - So many questions in ~tech

    metalmoon
    Link

    I am not a programmer, so I have no experience with Python or any other programming language, but I was able to follow this to get Wizard 30B up and running. It's from this Reddit thread; full credit to the OP on Reddit, /u/YearZero. Just pasting it here since you're not using Reddit anymore.


    Incredibly simple guide to run language models locally on your PC, in 5 simple steps for non-techies.

    TL;DR - follow steps 1 through 5. The rest is optional. Read the intro paragraph tho.

    ChatGPT is a language model. You run it over the cloud. It is censored in many ways. These language models run on your computer, and your conversation with them is totally private. And it's free forever. And many of them are completely uncensored and will talk about anything, no matter how dirty or socially unacceptable, etc. The point is - this is your own personal private ChatGPT (not quite as smart) that will never refuse to discuss ANY topic, and is completely private and local on your machine. And yes it will write code for you too.

    This guide is for Windows (but you can run them on Macs and Linux too).

    1. Create a new folder on your computer.

    2. Go here and download the latest koboldcpp.exe:

    https://github.com/LostRuins/koboldcpp/releases

    As of this writing, the latest version is 1.29

    Stick that file into your new folder.

    3. Go to my leaderboard and pick a model. Click on any link inside the "Scores" tab of the spreadsheet, which takes you to huggingface. Check the Files and versions tab on huggingface and download one of the .bin files.

    Leaderboard spreadsheet that I keep up to date with the latest models:

    https://docs.google.com/spreadsheets/d/1NgHDxbVWJFolq8bLvLkuPWKC7i_R6I6W/edit?usp=sharing&ouid=102314596465921370523&rtpof=true&sd=true

    Allow me to recommend a good starting model - a 7b parameter model that almost everyone will have the RAM to run:

    guanaco-7B-GGML

    Direct download link: https://huggingface.co/TheBloke/guanaco-7B-GGML/resolve/main/guanaco-7B.ggmlv3.q5_1.bin (needs 7GB ram to run on your computer)

    Here's a great 13 billion parameter model if you have the RAM:

    Nous-Hermes-13B-GGML

    Direct download link: https://huggingface.co/TheBloke/Nous-Hermes-13B-GGML/resolve/main/nous-hermes-13b.ggmlv3.q5_1.bin (needs 12.26 GB of RAM to run on your computer)

    Finally, the best (as of right now) 30 billion parameter model, if you have the RAM:

    WizardLM-30B-GGML

    Direct download link: https://huggingface.co/TheBloke/WizardLM-30B-GGML/resolve/main/wizardlm-30b.ggmlv3.q5_1.bin (needs 27 GB of RAM to run on your computer)

    Put whichever .bin file you downloaded into the same folder as koboldcpp.exe

    4. Technically that's it: just run koboldcpp.exe, and in Threads put how many cores your CPU has. Check "Streaming Mode" and "Use SmartContext" and click Launch. Point it to the model .bin file you downloaded, and voila.

    5. Once it opens your new web browser tab (this is all local, it doesn't go to the internet), click on "Scenarios", select "New Instruct", and click Confirm.

    You're DONE!

    Now just talk to the model like ChatGPT and have fun with it. You have your very own large language model running on your computer, not using the internet or some cloud service or anything else. It's yours forever, and it will do your bidding (evil laugh). Try saying stuff that goes against ChatGPT's "community guidelines" or whatever. Oh yeah - try other models! Explore!

    Now, the rest is for those who'd like to explore a little more.

    For example, if you have an NVIDIA or AMD video card, you can offload some of the model to that video card and it will potentially run MUCH FASTER!

    Here's a very simple way to do it. When you launch koboldcpp.exe, click on "Use OpenBLAS" and choose "Use CLBlast GPU #1". Here it will ask you how many layers you want to offload to the GPU. Try putting 10 for starters and see what happens. If you can still talk to your model, try doing it again and raising the number. Eventually it will fail, and complain about not having enough VRAM (in the black command prompt window that opens up). Great, you've found your maximum layers for that model that your video card can handle, so bring the number down by 1 or 2 again so it doesn't run out of VRAM, and this is your max - for that model size.

    This is very individual because it depends on the size of the model (7b, 13b, or 30b parameters) and how much VRAM your video card has. The more the better. If you have an RTX 4090 or RTX 3090 for example, you have 24 GB VRAM and you can offload the entire model fully to the video card and have it run incredibly fast.

    The next part is for those who want to go a bit deeper still.

    You can create a .bat file in the same folder for each model that you have. All those parameters that you pick when you ran koboldcpp.exe can be put into the .bat file so you don't have to pick them every time. Each model can have its own .bat file with all the parameters that you like for that model and work with your video card perfectly.

    So you create a file, let's say something like "Kobold-wizardlm-30b.ggmlv3.q5_1.bat"

    Here is what my file has inside:

    title koboldcpp
    :start
    koboldcpp ^
    --model wizardlm-30b.ggmlv3.q5_1.bin ^
    --useclblast 0 0 ^
    --gpulayers 14 ^
    --threads 9 ^
    --smartcontext ^
    --usemirostat 2 0.1 0.1 ^
    --stream ^
    --launch
    pause
    goto start

    Let me explain each line:

    Oh, by the way, the ^ at the end of each line is just there to allow multiple lines. All those lines are supposed to be one big line, but this lets you split it into individual lines for readability. That's all it does.

    "title" and "start" are not important lol

    koboldcpp ^ - that's the .exe file you're launching.

    --model wizardlm-30b.ggmlv3.q5_1.bin ^ - the name of the model file

    --useclblast 0 0 ^ - enabling ClBlast mode. 0 0 points to your system and your video card. Occasionally it will be different for some people, like 1 0.

    --gpulayers 14 ^ - how many layers you're offloading to the video card

    --threads 9 ^ - how many CPU threads you're giving this model. A good rule of thumb is put how many physical cores your CPU has, but you can play around and see what works best.

    --smartcontext ^ - an efficient/fast way to handle the context (the text you communicate to the model and its replies).

    --usemirostat 2 0.1 0.1 ^ - don't ask, just put it in lol. It has to do with clever sampling of the tokens that the model chooses to respond to your inquiry. Each token is like a tiny piece of text, a bit less than a word, and the model chooses which token should go next like your iphone's text predictor. This is a clever algorithm to help it choose the good ones. Like I said, don't ask, just put it in! That's what she said.

    --stream ^ - this is what allows the text your model responds with to start showing up as it is writing it, rather than waiting for its response to completely finish before it appears on your screen. This way it looks more like ChatGPT.

    --launch - this makes the browser window/tab open automatically when you run the .bat file. Otherwise you'd have to open a tab in your browser and type in "http://localhost:5001/?streaming=1#" as the destination yourself.

    pause

    goto start - don't worry about these, ask ChatGPT if you must, they're not important.

    Ok now the next part is for those who want to go even deeper. You know you like it.

    So when you go to one of the models, like here: https://huggingface.co/TheBloke/Nous-Hermes-13B-GGML/tree/main

    You see a shitload of .bin files. How come there's so many? What are all those q4_0's and q5_1's, etc? Think of those as .jpg, while the original model is a .png. It's a lossy compression method for large language models - otherwise known as "quantization". It's a way to compress the model so it runs on less RAM or VRAM. It takes the weights and quantizes them, so each number which was originally FP16, is now a 4-bit or 5-bit or 6-bit. This makes the model slightly less accurate, but much smaller in size, so it can easily run on your local computer. Which one you pick isn't really vital, it has a bigger impact on your RAM usage and speed of inferencing (interacting with) the model than its accuracy.
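
    If you want a feel for what quantization is actually doing, here's a toy numpy sketch of the idea (not GGML's real code or its q5_1 format, just an illustration): take a block of FP16 weights, store them as 4-bit integers plus a per-block scale and offset, then reconstruct them and see how small the error is.

    import numpy as np

    def quantize_block_4bit(weights):
        # Map a block of FP16 weights onto 16 levels (4 bits) using a scale and offset.
        w_min, w_max = float(weights.min()), float(weights.max())
        scale = (w_max - w_min) / 15 or 1.0   # 16 levels -> indices 0..15
        q = np.round((weights - w_min) / scale).astype(np.uint8)
        return q, scale, w_min

    def dequantize_block_4bit(q, scale, w_min):
        # Reconstruct approximate weights from the 4-bit indices.
        return (q.astype(np.float32) * scale + w_min).astype(np.float16)

    block = np.random.randn(32).astype(np.float16)   # one small "block" of model weights
    q, scale, w_min = quantize_block_4bit(block)
    restored = dequantize_block_4bit(q, scale, w_min)

    print("original size:", block.nbytes, "bytes")                   # 64 bytes of FP16
    print("quantized size:", q.size // 2, "bytes plus scale/offset") # ~16 bytes if packed 2 per byte
    print("max round-trip error:", float(np.abs(block.astype(np.float32) - restored.astype(np.float32)).max()))

    The error per weight is tiny relative to the weight values, which is why the model only gets slightly less accurate while shrinking to a quarter of the size.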

    A good rule of thumb is to pick q5_1 for any model's .bin file. When koboldcpp version 1.30 drops, you should pick q5_K_M. It's the new quantization method. This is bleeding edge and stuff is being updated/changed all the time, so if you try this guide in a month.. things might be different again. If you wanna know how the q_whatever compare, you can check the "Model Card" tab on huggingface, like here:

    https://huggingface.co/TheBloke/Nous-Hermes-13B-GGML

    TheBloke is a user who converts the most models into GGML and he always explains what's going on in his model cards because he's great. Buy him a coffee (also in the model card). He needs caffeine to do what he does for free for everybody. ALL DAY EVERY DAY.

    Oh yeah - GGML is just a way to allow the models to run on your CPU (and partly on GPU, optionally). Otherwise they HAVE to run on GPU (video card) only. So the models initially come out for GPU, then someone like TheBloke creates a GGML repo on huggingface (the links with all the .bin files), and this allows koboldcpp to run them (this is a client that runs GGML/CPU versions of models). It allows anyone to run the models regardless of whether they have a good GPU or not. This is how I run them, and it allows you to run REALLY GOOD big models, all you need is enough RAM. RAM is cheap. Video cards like RTX 4090 are stupid expensive right now.

    Ok this is the gist.

    As always check out /r/LocalLLaMA/ for a dedicated community who is quite frankly obsessed with local models and they help each other figure all this out and find different ways to run them, etc. You can go much deeper than the depths we have already plumbed in this guide. There's more to learn, and basically it involves better understanding what these models are, how they work, how to run them using other methods (besides koboldcpp), what kind of bleeding edge progress is being made for local large language models that run on your machine, etc. There's tons of cool research and studies being done. We need more open source stuff like this to compete with OpenAI, Microsoft, etc. There's a whole community working on it for all our benefit.

    I hope you find this helpful - it really is very easy: no code required, and you don't even have to install anything. But if you are comfortable with Google Colab, with pip installs, know your way around GitHub, and other Python-based stuff for example, well those options are there for you as well, and they open up other possibilities - like having the models interact with your local files, or creating agents with the models so they all talk to each other with their own goals and personalities, etc.

    2 votes
  11. Comment on What was the last event that significantly improved your life? in ~talk

    metalmoon
    Link Parent

    I drank about the same amount as you do, although I only ever drank it in the morning. I thought I was a light sleeper too, and I suppose I still am. But I'm finding that I'm sleeping much more soundly through the night than I was when drinking coffee, and when I do wake up I fall back to sleep more easily. Here's a good article on the guy who inspired me to try it; it's worth a read if you're interested! https://news.harvard.edu/gazette/story/2020/08/author-michael-pollan-discusses-how-caffeine-changed-the-world/

    1 vote
  12. Comment on What was the last event that significantly improved your life? in ~talk

    metalmoon
    Link Parent

    I typically drank two or three 8 oz. cups of coffee in the morning.

    1 vote
  13. Comment on What was the last event that significantly improved your life? in ~talk

    metalmoon
    Link Parent

    I was really worried about the headaches too, because I also suffer from migraines and feared I'd trigger one by quitting. I did a pretty careful taper down, eventually using Nespresso coffee pods to be very precise with my dosage on the way down. I got some decaf ones to mix in when I started getting down to really low levels. It worked pretty well. Just some very minor headaches for a few days.

  14. Comment on What was the last event that significantly improved your life? in ~talk

    metalmoon
    Link Parent

    Good suggestion. I haven't really looked to see if my morning energy is affected by my breakfast, but I'll monitor it more now to see what impact it has.

    1 vote
  15. Comment on What was the last event that significantly improved your life? in ~talk

    metalmoon
    Link

    For me, it was quitting caffeine. For the past five years or so, I wasn't sleeping very well. I thought it was because of job stress and a cat we adopted right before the pandemic. I had been drinking coffee regularly for about twenty years, so it never occurred to me that it might be related, even though I always stopped drinking coffee by the early afternoon because I recognized it impacted my sleep if I drank it beyond that time. I was inspired to try quitting caffeine altogether after hearing an interview with Michael Pollan about his most recent book on psychedelic drugs. He wanted to investigate his relationship with caffeine, so he decided to try eliminating it completely. He experienced some downsides to quitting, but one of the positives was that he said he started sleeping like a baby.

    So I decided to try it out, and lo and behold, my sleep very quickly improved. Where before I was regularly waking up (and staying up) in the middle of the night, now I'm usually able to quickly fall back to sleep. I'm often sleeping in until my partner wakes up now, which was completely unheard of before. I'm remembering my dreams more often, and my cat's nighttime antics no longer affect me the way they did before.

    Interestingly, one other effect I've noticed is that while my early morning energy levels are reduced, I have much more sustained energy throughout the day, particularly in the afternoon, where before I was often crashing and generally useless past 1 or 2 pm. The worst part is the mornings, though. It's really difficult to jump out of bed and into my day. I miss the ability to dial up my energy levels, but the higher quality sleep is really fantastic. I don't think I'll go back to regularly consuming caffeine, although I may experiment with occasionally consuming it after I've continued this complete abstinence project for a few more months.

    10 votes
  16. Comment on They plugged GPT-4 into Minecraft – and unearthed new potential for AI in ~games

    metalmoon
    Link Parent

    I'm witnessing AI replacing people in small pieces already at my job. I'm in marketing, and we had an agency develop a Microsoft Excel Online spreadsheet and script for generating tracking URLs for our ads. The generator needed updating, and instead of going back to the agency to do the work, we just asked ChatGPT to do the code updates, and it worked great. It was a small job, but it clearly took work away from a human in that instance, and I could easily see that happening more and more frequently as companies begin to recognize the possibilities and the tools for enabling it become more widespread.
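
    For anyone curious what that kind of script amounts to, here's a purely hypothetical Python sketch of a tracking-URL generator (the real one was an Excel Online script; the parameter names below are just the standard UTM conventions, not our actual setup):

    from urllib.parse import urlencode

    def build_tracking_url(base_url, source, medium, campaign):
        # Append standard UTM tracking parameters to an ad's landing-page URL.
        params = urlencode({
            "utm_source": source,        # e.g. "google" or "facebook"
            "utm_medium": medium,        # e.g. "cpc" or "display"
            "utm_campaign": campaign,    # e.g. "spring_sale"
        })
        return f"{base_url}?{params}"

    print(build_tracking_url("https://example.com/landing-page", "google", "cpc", "spring_sale"))

    It's the kind of boilerplate that's trivial for an LLM to update, which is exactly why the agency didn't get the follow-up work.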

    5 votes