  • Showing only topics in ~comp with the tag "api".
    1. When consuming an API with stated rate limits, how should one handle not exceeding them?

      My typical approach is one that I believe is pretty common: Reading the response header for current count and waiting if the limit is reached.

      However, I am currently working with a couple of APIs which don't implement that and instead leave their rate limits to an honesty system.

      Is it a case of throwing sleep statements into your code, or using some kind of "bucket" and "lock" system?

      I'd be interested to see any simple implementation people have used (the simpler the better).
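
      For context, the kind of "bucket" and "lock" approach I have in mind looks roughly like this minimal Python sketch (the class name and the 60-requests-per-minute figure are purely illustrative, not from any particular API):

      ```python
      import threading
      import time


      class TokenBucket:
          """Allow at most `rate` calls per `per` seconds, waiting when the bucket is empty."""

          def __init__(self, rate, per):
              self.capacity = rate               # maximum burst size
              self.tokens = float(rate)          # start with a full bucket
              self.fill_rate = rate / per        # tokens regained per second
              self.last = time.monotonic()
              self.lock = threading.Lock()       # safe to share across threads

          def acquire(self):
              """Block until a token is available, then consume it."""
              while True:
                  with self.lock:
                      now = time.monotonic()
                      # Refill based on elapsed time, capped at the bucket capacity.
                      elapsed = now - self.last
                      self.tokens = min(self.capacity, self.tokens + elapsed * self.fill_rate)
                      self.last = now
                      if self.tokens >= 1:
                          self.tokens -= 1
                          return
                      wait = (1 - self.tokens) / self.fill_rate
                  time.sleep(wait)               # sleep outside the lock


      # Illustrative: an API whose documented (but unenforced) limit is 60 requests/minute.
      limiter = TokenBucket(rate=60, per=60.0)

      def call_api(session, url):
          limiter.acquire()                      # waits here if we'd exceed the limit
          return session.get(url)
      ```

      Is something along those lines reasonable, or is there a simpler idiom people reach for?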

      9 votes
    2. Need help solutioning Microsoft APIM

      We have a backend that kind of does REST APIs but cannot handle simple Bearer tokens for authorization and cannot produce the full set of HTTP error codes (the platform just doesn't allow, for example, HTTP 501 to be returned programmatically). There is no Swagger for the API.

      The thought was to use Microsoft API Management Services as a proxy of sorts. It would handle the Bearer token up front, and then just proxy / wildcard the requests and responses to the backend. The hard part is that it needs to parse the returned response, and if there is something like an "{ errorCode: 501 }" property in the JSON, it needs to return HTTP 501 instead of the regular payload.

      Does anyone have any experience in setting this up? It seems like the basic policy processing won't cut it, and so function apps and logic apps seem to be the ticket. We want to keep this facade layer as thin as possible. Microsoft APIM is the only platform we're allowed to consider at this time.
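
      To make the requirement concrete, my rough understanding is that an outbound policy along these lines might do it - this is an untested sketch, and the errorCode property name is just our backend's convention:

      ```xml
      <outbound>
          <base />
          <choose>
              <!-- If the backend reports its error inside the JSON body, surface it
                   as a real HTTP status code instead of returning the payload.
                   preserveContent keeps the body available for the normal path. -->
              <when condition='@{
                  var body = context.Response.Body.As<JObject>(preserveContent: true);
                  return body != null && (int?)body["errorCode"] == 501;
              }'>
                  <return-response>
                      <set-status code="501" reason="Not Implemented" />
                  </return-response>
              </when>
          </choose>
      </outbound>
      ```

      I don't know whether that keeps the facade thin enough, or whether body parsing on every response pushes us toward function apps anyway.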

      4 votes
    3. Architecture for untrained software engineers (Python)

      Hey everyone,

      I've been programming for some time now, but without any formal education in CS I often get lost in the weeds when it comes to developing larger applications. I'm familiar with the principles of TDD and SOLID - which have helped with maintainability - but I still feel I'm lacking the ability to architect a properly structured system. As an example, I'm currently developing a Flask REST API for a website (just for learning purposes). This involves parsing an HTML response and serializing the result as JSON. I'm still quite unclear about how to structure this sort of thing. If any more experienced developers could point me in the right direction/offer up their opinion I'd be very appreciative. Currently I have something like this (based - I hope correctly? - on Uncle Bob's Clean Architecture).

      Firstly, I define the domain model, i.e. the structure of the API response. Then, working from the outside in:

      1. Infrastructure (Flask): The user makes a request via the interface (in my case, a request to some endpoint)
      2. Adapters: A request object checks whether the request is valid (and, on the way back, whether the response is valid) - is this layer only for error handling?
      3. Repository: I'm struggling a bit here; AFAIUI this layer is traditionally a database. In my case, however, once the request is valid, is this where I should handle the networking layer, i.e. all the requests that fetch the website source? I'm also confused because at this stage I should be returning the relevant domain model, like an ORM would, but as my data is unstructured I need to transform the response first. Where would it be best to handle this?
      4. Use Cases: Here I transform the domain model depending on the request - for example, filtering all objects by id. Have I understood this correctly?
      5. Serializers: Encode the domain model as JSON to return from flask route.
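
      To make that concrete, here's roughly how I picture layers 3-5 wiring together - a simplified Python sketch with made-up names, and the HTML parsing stubbed out:

      ```python
      from dataclasses import dataclass, asdict

      import requests
      from flask import Flask, jsonify

      # Domain model: plain data, no framework or HTTP knowledge.
      @dataclass
      class Article:
          id: int
          title: str

      # Repository: hides *where* the data comes from (an HTTP fetch + HTML parse here)
      # and always hands back domain objects.
      class ArticleRepository:
          def __init__(self, base_url):
              self.base_url = base_url

          def list_articles(self):
              html = requests.get(self.base_url).text
              return self._parse(html)

          def _parse(self, html):
              # Real parsing (e.g. BeautifulSoup) would go here; stubbed for the sketch.
              return [Article(id=1, title="placeholder")]

      # Use case: pure logic against the domain model, easy to unit test.
      def get_article_by_id(repo, article_id):
          return next((a for a in repo.list_articles() if a.id == article_id), None)

      # Infrastructure (Flask) + serializer on the outermost ring.
      app = Flask(__name__)
      repo = ArticleRepository("https://example.com")

      @app.route("/articles/<int:article_id>")
      def article_endpoint(article_id):
          article = get_article_by_id(repo, article_id)
          if article is None:
              return jsonify({"error": "not found"}), 404
          return jsonify(asdict(article))            # serializer: domain model -> JSON
      ```

      In particular, I'm unsure whether the HTML-to-domain-model transformation belongs inside the repository like that.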

      If you got this far, thanks so much for reading. I really hope to hear the opinions of more experienced devs who can steer me in the right direction/correct me should I have misunderstood anything.

      8 votes
    4. Ask Tildes: Design practices for retrieving dozens (or hundreds) of related records over a RESTful API

      I'm looking for some feedback on a feasible mechanism for structuring a few API endpoints where a purely RFC-spec compliant REST API wouldn't suffice.

      I have an endpoint which returns $child entries for a $parent resource, let's call it: /api/parent/:parentId/children. There could be anywhere from a dozen to several hundred children returned from this call. From here, a child entity is related to a single userOrganization, which itself is a pivoting entity on a single user. The relationship between a child and a user is not strictly transitive, but each child has only one userOrganization, which has only one user, so it is trivial to reach a user from a child resource.

      Given this, the data I need for this particular request involves retrieving all users for a parent. The obvious, and incorrect, solution to the problem is to make the request mentioned above and then iterate through the results, making an API request to retrieve each user. This is far from ideal, as it would obviously mean up to several hundred API calls.

      There are a few more scalable solutions that could solve this problem, so any input on these ideas is great; but if you have a better proposal that also works, I'm keen to explore that!

      Include user relationships in the call by default.

      This certainly does solve the problem, but it's also pumping down a load of data I don't necessarily need. This would probably double the number of bytes travelling along the wire, and in 8 out of 10 calls that extra data isn't needed.

      Have a separate /api/parent/:parentId/users call.

      Another option that partially solves the issue: I need data from both the child and the user to format this view, so I'd still need to make the initial call I documented earlier. Semantically, it feels a bit odd to have this as a resource because I don't consider a user to be nested under a parent in terms of database topology.

      Keep the original call, but add a query parameter to fetch the extra data

      This comes across as objectively the 'least worst' idea, in terms of flexibility and design. Through the addition of the query parameter, you could optionally retrieve the relationship's data. It seems brittle, though, and doesn't scale well to other endpoints where it could be useful.

      Utilize a Stripe expands-style query parameter.

      Stripe implements the ability to retrieve all related records from an API endpoint by specifying the relations as strings. This is essentially the same as the above answer, but scaled to all available API endpoints. I love this idea, but implementing it in a secure way seems fraught with disaster. For example, this is a multi-tenant application, and it would be trivial to request userOrganization.user.organizations.users. This would retrieve all other organisations for the user, and their users! This is because my implementation of expands simply utilises the ORM of my choice to perform a database join, and of course the database has no knowledge of application tenancy!
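
      To illustrate the query-parameter/allowlist direction (options 3 and 4), here's a rough Flask-style sketch - the endpoint, ALLOWED_INCLUDES, and the two fetch_* helpers are hypothetical stand-ins, not my actual code:

      ```python
      from flask import Flask, abort, jsonify, request

      app = Flask(__name__)

      # Only relations explicitly listed here can be expanded on this endpoint,
      # so a caller can never walk across tenants via arbitrary relation paths.
      ALLOWED_INCLUDES = {"userOrganization.user"}

      def fetch_children(parent_id):
          # Placeholder for the ORM query returning the child rows for this parent.
          return []

      def fetch_users_for_parent(parent_id):
          # Placeholder for a single joined query (child -> userOrganization -> user),
          # scoped to this parent/tenant and keyed by child id.
          return {}

      @app.route("/api/parent/<int:parent_id>/children")
      def list_children(parent_id):
          requested = set(filter(None, request.args.get("include", "").split(",")))
          if not requested <= ALLOWED_INCLUDES:
              abort(400, description="unsupported include")

          children = fetch_children(parent_id)                 # one query, not N
          payload = [child.to_dict() for child in children]

          if "userOrganization.user" in requested:
              users_by_child = fetch_users_for_parent(parent_id)   # one more query
              for item in payload:
                  item["user"] = users_by_child.get(item["id"])

          return jsonify(payload)
      ```

      The allowlist is what (I think) keeps the Stripe-style approach from becoming a tenancy hole, at the cost of having to declare the expandable relations per endpoint.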


      Now, I do realise this problem could easily be solved by implementing a GraphQL API server, which I have done in the past, but unfortunately time and workload constraints make a GraphQL-based solution infeasible. As much as I like GraphQL, I'm not as proficient in that area as I am at implementing high-quality traditional APIs, and the applications I'm working on at the moment are focused on choosing boring technology, not spending excessive innovation tokens.

      Furthermore, I consider the concepts around REST APIs to be more of an aspirational sliding scale than a well-defined standard, because let's face it, the majority of popular APIs today aren't REST-compliant - even Stripe's isn't - and it's usually both financially healthier and more productive to choose a development path that results in a rough product that can be refined later than to aim for a perfect initial release. All this said, I don't mind proposals or solutions to my problem that are "good enough", as long as they aren't too hacky! :)

      10 votes
    5. Reddit's redesign has been down all day, but mobile apps and old reddit work. Does reddit not use the same public API for the redesign?

      I'm not sure if this is the case for everyone, but the new reddit can't load any data, at least for me. However, old.reddit.com works, and all the mobile apps, which obviously use the reddit API, seem to work. I am curious: does reddit have a different version of their API for the redesign, and is that what's been down for hours?

      edit: I know that reddit must allow their own product to do things that other products don't... it seems the chat API is not open to 3rd parties, for example... but I assumed that they would have just blocked certain API endpoints from public exposure. Based on my blind troubleshooting of this case, though, it seems that they must be using a totally different interface altogether for the redesign?


      edit 2: A copy-paste of my down-thread comment, in case you don't read the whole thread; the context is that I realize this must not be a global issue.

      Hmm, so I've heard reddit is super-cached... is this possibly a caching fault then?

      reddit uses redis, correct? And it must be sharded, right? So maybe some redis cluster nodes are down?

      I'm trying to learn here, and I am likely asking the wrong questions... The goal of my post was to understand this type of failure, as I realize it must be partial - if all of reddit's redesign were down, it would be news. If anyone could correct any of my statements or assumptions I would really appreciate it.

      13 votes