Fortnightly Programming Q&A Thread
General Programming Q&A thread! Ask any questions about programming, answer the questions of other users, or post suggestions for future threads.
Don't forget to format your code using the triple backticks or tildes.
Here is my schema:
```sql
CREATE TABLE article_to_warehouse (
article_id INTEGER
, warehouse_id INTEGER
)
;
```
How do I add a `UNIQUE` constraint?
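One common answer, sketched below using Python's `sqlite3` so it's runnable, and assuming the goal is to make each `(article_id, warehouse_id)` pair unique rather than a single column: declare a table-level `UNIQUE` constraint (or, for a table that already exists, create a unique index).

```python
import sqlite3

# A sketch of one common fix, assuming the goal is to make each
# (article_id, warehouse_id) pair unique. The UNIQUE syntax below is
# standard SQL; SQLite here is purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute(
    """
    CREATE TABLE article_to_warehouse (
        article_id INTEGER
        , warehouse_id INTEGER
        , UNIQUE (article_id, warehouse_id)
    )
    """
)
# For an already-existing table, a unique index achieves the same effect:
#   CREATE UNIQUE INDEX uq_article_warehouse
#       ON article_to_warehouse (article_id, warehouse_id);
conn.execute("INSERT INTO article_to_warehouse VALUES (1, 10)")
try:
    conn.execute("INSERT INTO article_to_warehouse VALUES (1, 10)")
    duplicate_rejected = False
except sqlite3.IntegrityError:
    duplicate_rejected = True
print("duplicate rejected:", duplicate_rejected)
```

Note that plain SQLite doesn't support `ALTER TABLE ... ADD CONSTRAINT`, which is why the unique-index form is handy there; bigger databases like Postgres accept either.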
I'm a bit late, but there's an oddly specific question I have around OAuth2, or rather the OpenID part I think, which I can't seem to find a clear answer on.
When you get an access token and an id_token back from an auth server such as Google, do you need to use both?
Can you validate that the user is logged in with just the id_token?
Or can you only use that one to get information about the user?
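Not an authoritative answer, but roughly: the id_token is a signed JWT that asserts *who the user is*, so "is this user logged in?" can be answered by verifying the id_token's signature and claims alone; the access token is what you present to the provider's APIs on the user's behalf. A stdlib-only sketch of what lives inside an id_token (the token below is fabricated for the demo, and a real one must have its signature verified against the provider's published keys before any claim is trusted):

```python
import base64
import json

# An id_token is a JWT: three base64url segments, header.payload.signature.
# The middle segment is JSON with identity claims ("iss", "sub", "email", ...).
def decode_payload(jwt: str) -> dict:
    payload_b64 = jwt.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Fabricated token for illustration only -- NOT signature-verified.
claims = {"iss": "https://accounts.google.com", "sub": "1234567890",
          "email": "user@example.com"}
fake_payload = base64.urlsafe_b64encode(
    json.dumps(claims).encode()).decode().rstrip("=")
fake_jwt = "eyJhbGciOiJSUzI1NiJ9." + fake_payload + ".sig"

print(decode_payload(fake_jwt)["sub"])  # the identity comes from the id_token
```

The access token, by contrast, is usually opaque to your app: you don't decode it, you just send it to endpoints like the provider's userinfo or other APIs.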
So, I want to go through this little tutorial on a full-text search engine and try to convert it to equivalent(ish) Rust code. I'm very early in, and have gotten to the point where I can decode a gzipped file via [`GzDecoder`](https://docs.rs/flate2/1.0.20/flate2/read/struct.GzDecoder.html)... however, when I use the `read_to_string()` method as in the example, it seems to stop once it encounters the end-of-line. The docs for `read_to_string()` say it should read to the end-of-file, though. So... I'm a little confused as to why it's not reading the entire file. Then again, the file is ~6GB and I'm not sure it could even fit into a Rust `String`... 🤔
Anyhow, here's what I got:
Of course, I haven't gotten around to actually trying to implement the parser, yet...
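For what it's worth, a std-only sketch of how `read_to_string` behaves, using an in-memory `Cursor` in place of the real `GzDecoder` (an assumption about the setup): it reads clear to EOF, newlines included, but the whole call fails with `ErrorKind::InvalidData` if any byte it reads isn't valid UTF-8 — so a short result is usually a sign the error path or the reader itself is worth inspecting, not EOF.

```rust
use std::io::{Cursor, ErrorKind, Read};

// Read everything from any reader into a String, as in the gz case.
fn slurp(mut r: impl Read) -> std::io::Result<String> {
    let mut s = String::new();
    r.read_to_string(&mut s)?;
    Ok(s)
}

fn main() {
    // read_to_string reads to EOF, newlines included.
    let ok = slurp(Cursor::new("<feed>\nlots more after the newline")).unwrap();
    assert!(ok.ends_with("newline"));

    // ...but a single invalid UTF-8 byte anywhere fails the whole call.
    let err = slurp(Cursor::new(vec![b'<', 0xff, b'>'])).unwrap_err();
    assert_eq!(err.kind(), ErrorKind::InvalidData);

    println!("ok");
}
```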
Does the file have the end of line markers your language expects? Is it new line? Carriage Return/Line Feed? Something else?
When I print the string in debug mode I get `<feed>\n`, so I believe it is definitely reading the new line and cutting off after that. But `f` and `gz` have many more bytes in 'em. `f` is a `Vec<u8>` and it's returning a length of 796157717, so... ~796MB. And when I print out `gz` it displays an ongoing list of bytes, so I can only assume that `GzDecoder::new()` is functioning properly... which leads me to believe that `read_to_string()` isn't doing what I think it should be.

That's odd.
Does the same thing happen if you slice `f` to be smaller?
Is `gz.read_to_string` definitely `std::io::Read::read_to_string`?
Does `gz` have more bytes available after the `read_to_string` call?
Is the GzDecoder returning some `io::ErrorKind`
early in its decoding?

I'm getting stuck on a CSS issue.
I have a 3 column layout that has a sidebar, main content and table of content.
My problem is that the main column has a max width of 65ch. I'd really like that column to fill as much of the parent as possible without going above 65ch. Right now it takes as little horizontal space as it needs and nothing more, which looks weird when switching pages, since the left alignment of the main content will be different. I want my layout to be responsive, though, so that no matter what I never get a horizontal scroll bar.
I also want the "Edit this page on GitHub" link to remain right-aligned relative to the main content, which is why I used a grid layout as their parent.

Hopefully that made sense. I wish I could do a simple `width: 100%;` but that didn't work.

TLDR: I want pages with almost no main content to have a layout like the other pages with more content.
Edit: Looks like I managed to figure it out. I moved the edit link back with the main content. After that I used a flex layout instead of grid and applied `flex-grow: 1;` to the main content, and it seems to have done the trick.

I have a question that might elicit some good suggestions here. I am writing a scientific simulation, and in order to optimize it I would like to get a better understanding of how memory in C++ works and the best ways to optimize load orders.
Basically, I am working with relatively large amounts of data (arrays that are ~100 GB, on the lower end). I frequently have to shift this data around to different buffers in order to do some stuff to it, like doing Fourier transforms or computing other things related to this data and manipulating that output, as well as writing this data to files at various stages. Since the amount of data is so large, I want to get a better grasp of how it is being stored in memory, and how the operations I do on it might be optimized by limiting how often I am needlessly copying this data from one buffer to another.
Does anyone have resources on understanding how this works on a somewhat fundamental level? I realize that this is an incredibly broad topic and my question may be unclear or not even well posed - that's a reflection of the fact that I really don't know a lot about what I am asking. I am happy to clarify if there are unclear things here.
What's the dimensionality of your arrays and how are they stored? For example, if you have a 2 dimensional array, it might be tempting to use a std::vector of std::vector of whatever data. But that can be bad for data locality as the inner vectors can be in very different places in memory. CPUs these days generally work best when the data you need to work with is all near each other. So a single std::vector that's the size you need is better in many cases.
As always, it's better to measure than to guess. You can use a profiler to see where your program is spending most of its time, and then optimize that part of your program.
I'm assuming this is happening on a machine with huge amounts of memory?
Are you asking for help on the level of 'I don't know much about C++, so I don't understand references and/or pointers', or higher? (E.g. are you used to C and similar languages, or higher level things like Java that manage your memory for you?)
If it's at that level, the most basic advice is to take a const reference whenever you need the data, but are not changing it, and use a non-const reference if you are changing it, but don't need to leave an unchanged copy.
I realize this advice may not be super legible if you're not familiar, but I can elaborate.
How the memory works depends on what data structure you're using, but for std::arrays and std::vectors, all the data will be stored contiguously in memory (and probably in non-standard-library array types too). If you don't pass this data to functions, etc. as either a reference or pointer, you will be copying all of it.
For higher level stuff, if the computer running this simulation runs Linux or OSX, you can use valgrind (massif in particular) to profile memory/heap usage.
This post is a bit wandering and doesn't feel particularly helpful, but I think you expected that, so if anything here grabs on to you, let me know and I can probably be more helpful.