19 votes

So you think you know C?

6 comments

  1. [2]
    unknown user
    Interesting that the answer is D for all the questions...

    Thanks for posting this!

    4 votes
    1. mrbig
      That's what you get when you try to be as random as possible :P

      1 vote
  2. Eric_the_Cerise
    Pishaw. This is nothing. If you actually think you know C, check out The Underhanded C Contest (http://underhanded-c.org/) (slogan: "The official perfectly innocent web page for law-abiding good guys.")

    3 votes
  3. [3]
    Arbybear
    What was the reasoning behind allowing all of these things to be implementation-specific? Does C++ define some of this stuff or is it the same?

    1 vote
    1. unknown user
      One possible reason is making sure the legacy stuff keeps compiling. C appeared in 1973, and the first standard, ANSI C, only appeared in 1989. And even after that, a lot of implementations were (and probably still are) non-conforming.

      The other is supporting all those weird architectures with 9-bit chars and 36-bit words that were still in use in the 70s and 80s.
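
      For a sense of how much latitude that leaves, here's a minimal sketch; every one of these values is implementation-defined, so the output depends entirely on your platform (on a 36-bit word machine, CHAR_BIT could legally be 9):

      ```c
      #include <limits.h>
      #include <stdio.h>

      int main(void) {
          printf("CHAR_BIT     = %d\n", CHAR_BIT);      /* at least 8, not exactly 8 */
          printf("sizeof(int)  = %zu\n", sizeof(int));  /* int must cover at least 16 bits */
          printf("sizeof(long) = %zu\n", sizeof(long)); /* long must cover at least 32 bits */
          return 0;
      }
      ```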

      IIRC, not only does C++ leave these things undefined as well, it adds hundreds more undefined things of its own. And as a cherry on top, some of the things that C does define, C++ defines differently.
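
      One divergence I'm fairly sure of: a character literal has type int in C but type char in C++, so the same line can print different values depending on which language you compile it as:

      ```c
      #include <stdio.h>

      int main(void) {
          /* Prints sizeof(int) when compiled as C, but 1 when compiled as C++,
             because 'a' has type int in C and type char in C++. */
          printf("%zu\n", sizeof('a'));
          return 0;
      }
      ```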

      8 votes
    2. BossHogg2020
      > What was the reasoning behind allowing all of these things to be implementation-specific?

      Most of the cases shown there are just about using types which are defined as being at least N bits long. You know the type is big enough to hold the numbers you want it to hold, but you don't have to work out for each architecture whether the smallest type is actually a good fit; it may differ from the 'natural' type, which is longer and which memory accesses have to be aligned on anyway. So you let the platform's compiler use a longer type than strictly needed if it wants to, instead of writing complicated and fragile Makefiles or preprocessor macros yourself.
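
      To make that concrete, here's a rough sketch using the C99 minimum-width and fast types, which encode exactly that "at least N bits, but let the platform choose" idea:

      ```c
      #include <stdint.h>
      #include <stdio.h>

      int main(void) {
          /* int_least16_t guarantees at least 16 bits; int_fast16_t also
             guarantees 16 bits but lets the implementation pick whatever
             width is fastest here, often plain int rather than the smallest fit. */
          printf("int_least16_t: %zu bytes\n", sizeof(int_least16_t));
          printf("int_fast16_t:  %zu bytes\n", sizeof(int_fast16_t));
          return 0;
      }
      ```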

      But if you wish, you can just as well use types which are defined as having an exact length: int32_t, uint16_t, etc. They were standardised in C99, so they have been around for 20 years.
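
      For example (note that the exact-width types are only required to exist on platforms that can actually provide them, which is nearly everything these days):

      ```c
      #include <stdint.h>
      #include <inttypes.h>
      #include <stdio.h>

      int main(void) {
          int32_t  a = -42;   /* exactly 32 bits, two's complement */
          uint16_t b = 65535; /* exactly 16 bits, unsigned */
          printf("%" PRId32 " and %" PRIu16 "\n", a, b);
          return 0;
      }
      ```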

      2 votes