Migrating APIs to Tyk

bitsofinfo
10 min read · Jun 28, 2018

In the recent past I was helping on a project whose objective was to migrate off a very costly, proprietary, appliance-based API gateway solution and onto a cheaper alternative, preferably an open-source API gateway offering. The main challenge was that a large percentage of the APIs to be migrated were legacy SOAP, in addition to a slew of REST ones. The daily traffic for these APIs was nothing massive, and the APIs were consumed primarily by other internal teams, with some external consumers as well (i.e. “self service” gateway features were not important).

The general requirements?

  1. Open source
  2. Extensible via in house skill sets
  3. Should have the option to run in a SaaS/Cloud hosted solution or on premises
  4. Option of commercial support
  5. Should have some sort of administrative dashboard and basic analytics
  6. Much… much cheaper than the solution that was currently being paid for
  7. Our existing APIs and their various security contracts must remain intact post-migration to a replacement, with or without some custom development.

The REST APIs, mainly secured via OAuth2, were not a concern. The legacy SOAP APIs, however, would prove to be more challenging, as some of them were protected via a custom legacy auth mechanism based around TLS, a non-standard authentication header, and deep inspection of payloads to formulate custom authn/authz checks against a secondary authentication system. The latter was enabled easily via the legacy “point and click,” directive-driven API gateway appliance solution that had been in place for many years… and cost an arm and a leg… which was the point of this project: replace it!

The first part of this project was evaluating the array of offerings out there at the time (this was ~12 months ago). After eliminating several projects for being promising but too new and not widely adopted (e.g. gravitee.io), or robust-looking but too old (think apiman), the field really came down to Kong and Tyk.

Kong

Kong has been around forever; you see them at all the conferences, and it is pretty much the household name in the world of open-source API gateways, so this was the first one I looked at. Kong is written in Lua.

Note: at the time of this writing, this evaluation took place ~12 months ago and I’m sure some things have improved in the Kong ecosystem.

I’ll keep this short: the experience of installing and getting Kong up and running locally on a laptop, simply to do some basic testing, was extremely clunky, and I encountered several errors during the process with the database back-end plugins (Postgres & Cassandra). This was true both with and without Docker. Once the gateways were up for testing, beyond presenting the most basic API I came to a standstill quite quickly, as my requirements for implementing the custom SOAP auth were quickly leading me down the road to Lua. Sure, our development team could pick it up, but it wasn’t in the team’s skill set, and the thought of both getting folks up to speed on Lua and developing the functionality we needed was not a good fit. Secondly, the DevOps team’s skills and existing infrastructure were heavily invested in MySQL, Redis, and MongoDB (not Cassandra or Postgres).

Kong (like many others) is fully manageable via REST APIs, which is great; however, we needed some level of administrative GUI for a team that didn’t have the time to script out the management functions they needed to perform on a daily basis. Keep in mind this was to replace a commercial API gateway platform that came with an administrative UI, so this was a requirement. In that light, I was simply not impressed with the available Kong administrative interfaces at the time (all of which were third-party open-source projects), all of which presented extremely basic shells around Kong’s administrative APIs (think simple key-value pair editors). Lastly, the enterprise offering available through Mashape at the time also appeared quite immature after talking with them about it. The enterprise dashboard/interface was extremely expensive and frankly not much more functional than what was already available for free in the third-party open-source implementations.

Given all of the above, Kong really was a non-starter for what I needed to do. It seemed a bit basic out of the box, and other than the gateway itself, I felt the overall package, architecture, and enterprise offerings presented by Mashape (at the time) were a work in progress still in the early stages. I know a ton of people use Kong and love it; this is not to knock it per se, but it simply did not fit this use case’s particular needs, nor was it a good match for the team from a development and DevOps standpoint.

Tyk

The other candidate on the block was Tyk. Tyk is written in Go.

Tyk, like Kong, provides an API gateway offering that is fully open-source and completely manageable via REST (keys/sessions are stored in Redis). On top of that sit the dashboard (closed source) for managing gateways and viewing analytics (backed by MongoDB), and Tyk Pump for moving gateway analytics into MongoDB for analysis in the dashboard.

In short, Tyk was impressive. I downloaded their Docker quick-start compose file, and in a few short commands had a functioning API gateway up and running, including a decent administrative GUI complete with some analytics. I was able to add an API (via the GUI) with some URL rewriting plus basic auth quite easily, and it actually worked the first time.
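For reference, that kind of setup maps onto a classic Tyk API definition that looks roughly like the following. This is a minimal sketch, not the exact definition used; the listen path, target URL, and rewrite rule are hypothetical placeholders:

```json
{
  "name": "my-test-api",
  "use_basic_auth": true,
  "proxy": {
    "listen_path": "/my-test/",
    "target_url": "http://upstream.internal:8080/",
    "strip_listen_path": true
  },
  "version_data": {
    "not_versioned": true,
    "versions": {
      "Default": {
        "name": "Default",
        "use_extended_paths": true,
        "extended_paths": {
          "url_rewrites": [
            {
              "path": "legacy/{id}",
              "method": "GET",
              "match_pattern": "legacy/(\\d+)",
              "rewrite_to": "modern?id=$1"
            }
          ]
        }
      }
    }
  }
}
```

The dashboard GUI is essentially editing a document of this shape under the hood, which is also why everything it does is equally scriptable via the REST admin APIs.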

All of this out of the box was free (with only one gateway) for you to run and manage on your own, but like every other OSS company out there, Tyk has various paid plans: fully cloud-hosted, fully on-premises, or a mixed (hybrid) approach where the dashboard/Mongo side of things runs in the Tyk-managed cloud while the API gateways run in your datacenter. The prices seemed much more reasonable.

OK… great. This passed my first test. Next was how I could extend this to tackle the most challenging part of this migration: writing plugins to deep-inspect payloads and relay the authentication piece to an out-of-band custom authentication and authorization service… before letting Tyk proxy anything.

Tyk Plugins

IMHO this is where the rubber hits the road for any piece of software: extensibility. This is where Tyk shines and where I started getting excited about actually being able to implement what I needed to do. Tyk supports two primary kinds of “plugins”: the first are in-process JSVM plugins (a JavaScript virtual machine embedded in the Go gateway via Otto), and the second are out-of-process “rich plugins” that you can implement via Lua, Python, or gRPC… which pretty much means you can extend Tyk in any language you want. This is huge.

For my evaluation test I just experimented with coding a JavaScript middleware plugin. I wanted to do something very basic: see if I could relay basic auth from an original request to another simple out-of-band REST API on a completely separate endpoint that I coded in Node.js. My real use case wasn’t basic auth, but if I could make this simple test work, I had enough to know I could move forward with more prototyping.

myTest.js simple plugin sample (abbreviated):

log("myTest pre-middleware plugin test initializing");

function base64Decode(data) {
  // some b64 decode function here
}

function sessionIsNotExpired(session) {
  // some code to determine if the session is expired
}

// create a JSVM middleware object
var myTest = new TykJS.TykMiddleware.NewMiddleware({});

// provide the impl that will be called per invocation
// of any API def that uses this plugin
myTest.NewProcessRequest(function(request, session, config) {

  // grab the raw Authorization header and extract the username/pw
  var rawAuthorization = request.Headers["Authorization"][0];
  var decodedUnamePwPair = base64Decode(rawAuthorization.split(" ")[1]);
  var parts = decodedUnamePwPair.split(":");
  var uname = parts[0].trim();
  var pw = parts[1].trim();

  // grab the out-of-band auth backend fqdn we will do the auth check against
  var myAuthHost = config.config_data.my_test.my_auth_host;

  // a unique session key for Tyk
  var mySessionKey = "....something-unique-to-this-request-and-or-user";

  // Set the auth token header for Tyk's downstream auth module.
  // Note your API def's "Authentication Mode" should be set to "Auth Token"
  // AND the "auth key header name" to "X-AT-Auth"
  request.SetHeaders = { "X-AT-Auth": mySessionKey };

  // attempt to fetch any pre-existing Tyk session for this key
  var tykSession = JSON.parse(TykGetKeyData(mySessionKey));

  // if the session is legit and active we have nothing to do;
  // let the request through
  if (sessionIsNotExpired(tykSession)) {
    return myTest.ReturnData(request, {});
  }

  // otherwise relay the auth check to another system
  var authRequest = {
    "Method": "GET",
    "Body": "",
    "Headers": {
      "x-my-username": uname,
      "x-my-password": pw
    },
    "Domain": ("https://" + myAuthHost),
    "Resource": "/my-auth-check"
  };

  // execute it
  var response = JSON.parse(TykMakeHttpRequest(JSON.stringify(authRequest)));

  // if not a 200, return a 401
  if (response.Code != 200) {
    request.ReturnOverrides.ResponseCode = 401;
    request.ReturnOverrides.ResponseError = "MyTest Auth Relay Failed: " + response.Code;
    return myTest.ReturnData(request, {});
  }

  // otherwise we are good, create a new Tyk session and store it
  var newSessionString = "{\"expires\": \"SOMETHING...\", " +
    // ....
    "}";
  TykSetKeyData(mySessionKey, newSessionString);

  // return and let the proxying continue
  return myTest.ReturnData(request, {});

});

log("myTest pre-middleware test initialized");

I simply took the above JavaScript file, shelled into the gateway container and placed it in the middleware directory, then referenced it by name in the API def. I restarted the gateway and it was loaded. I sent a single request through, the above test worked fine, and it provided the confidence to take this idea much, much further.
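For file-based (non-bundle) JS middleware, the API definition references the dropped-in file via its custom_middleware section. A sketch of what that fragment looks like (the path is relative to the gateway’s middleware directory, and "otto" names the JSVM driver):

```json
"custom_middleware": {
  "pre": [
    { "name": "myTest", "path": "middleware/myTest.js", "require_session": false }
  ],
  "driver": "otto"
}
```

The "name" must match the middleware object created in the script (`var myTest = new TykJS.TykMiddleware.NewMiddleware({})`), which is how the gateway wires the file to the hook.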

Eventually a more robust prototype was created to prove the feasibility of migrating these legacy SOAP APIs via several plugins chained together in a much more complex but decoupled fashion, where plugins exchanged data with other downstream plugins via custom headers (or upstream, whatever you prefer to call it… always confusing). It ended up looking roughly like the below.

  • Take a request containing a non-standard authentication header, and evaluate the request to determine which endpoint to auth against and ultimately where Tyk should proxy the API request.
  • Inspect a SOAP body for specific metadata and extract it
  • Take all of the above information, invoke an external authentication service, and proxy to a custom backend; a backend unique to the individual request.
  • All through a single API definition
  • All plugins are configured via config data to customize behavior
  • All plugins are deployed as a bundle.
  • The plugins can be used independently or re-used for other chains
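The header-exchange convention between chained plugins can be sketched with plain functions. This is a minimal illustration only; the header names, the routing rule, and both functions are hypothetical, standing in for two plugins where the first publishes its “return values” as custom headers and the next reads them as parameters:

```javascript
// Plugin A (sketch): inspect the non-standard auth header, decide which
// auth endpoint and backend apply, and publish the decision as headers
// for the next plugin in the chain.
function routeRequest(headers) {
  // hypothetical routing rule keyed off the legacy auth header
  var realm =
    (headers["X-Legacy-Auth"] || "").indexOf("partner:") === 0
      ? "partner"
      : "internal";
  headers["X-Chain-Auth-Host"] = realm + "-auth.internal";
  headers["X-Chain-Backend"] = realm + "-backend.internal";
  return headers;
}

// Plugin B (sketch): consume the headers plugin A produced to pick
// the upstream target for this individual request.
function pickBackend(headers) {
  return headers["X-Chain-Backend"];
}

var headers = routeRequest({ "X-Legacy-Auth": "partner:abc123" });
var backend = pickBackend(headers);
```

In the real chain the same idea runs inside `NewProcessRequest` handlers, reading `request.Headers` and writing via `request.SetHeaders`, with the final plugin (or the API definition) using the routing header to select the upstream.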

Could the above have been done in Kong? Probably, but it would have had to be developed and maintained in Lua, and that just wasn’t a good fit for this use case. Pretty much any developer you hire off the street has some experience with JavaScript.

Some Tyk JSVM plugin coding tips:

  • Use custom HTTP request headers to convey information between plugins. Headers serve as plugin parameters, and a plugin’s “return values” can be additional headers for the next consumer.
  • Be certain to always force those headers to some value (to avoid injection from the outside), unless you have a limited use case for that.
  • Optionally wipe the custom headers your plugins use for exchanging data prior to letting Tyk take over and proxy your request forward.
  • Want to write one function and reuse it from several different JS plugin files? Simply declare all your shared JS functions in a myinit.js (or whatever) and be sure it is declared first in your custom_middleware.[pre|post] section in the bundle manifest; all functions, regardless of plugin file, are placed into a shared global namespace in the Otto interpreter. See here for more info.
  • Take advantage of your API def’s config_data section; you can use this to convey configuration to your chain of plugins… and since your API definitions in Tyk are completely manageable via APIs, you can dynamically update your plugin configs pretty easily from other sources. When my bundle is made up of more than one plugin, as a convention I structure each plugin’s config as config_data.[plugin-name].{...}
  • Package your plugins as bundles and pull them from a bundle download server. This way you can hot-update your plugins on your live API definitions and revert pretty quickly as well.
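Putting a few of these tips together, a bundle’s manifest.json ties the files and hooks into one deployable unit. A sketch under the conventions above (file names are hypothetical; myinit.js is listed first so its shared functions load before the plugins that call them):

```json
{
  "file_list": ["myinit.js", "myTest.js"],
  "custom_middleware": {
    "pre": [
      { "name": "myinit", "require_session": false },
      { "name": "myTest", "require_session": false }
    ],
    "driver": "otto"
  },
  "checksum": "",
  "signature": ""
}
```

The gateway pulls the zipped bundle from the configured bundle server, so pointing an API definition at a new (or previous) bundle is what makes hot updates and quick reverts possible.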

Summary

That said, this wasn’t a complete cakewalk by any means. Throughout the development and migration I ran into many, many issues, and it wasn’t perfect… the end result running today came from a fair amount of development and back-and-forth with the Tyk community/team to get issues fixed and workarounds brainstormed; as a result I think the JSVM plugin engine is much more robust than when I initially started developing in it. The Tyk team and community were ridiculously responsive and helpful, more so than any other open-source project I’ve interacted with. This included actively addressing issues submitted to GitHub… AND actually getting them fixed in releases in a timely manner… AND all of this happened before any hint of a sales inquiry, as I had not yet recommended Tyk to the team.

All in all, I dig Tyk. The architecture is solid and well designed, the plugin architecture is extremely extensible with unlimited language options, and finally, the implementation is loosely coupled yet very cohesive. These are all good things. Most importantly, it works great, and the team behind it seems to care about making their project great and listening to the non-paying community that uses it.

Check it out –> https://github.com/TykTechnologies/tyk

Originally published at bitsofinfo.wordpress.com on June 28, 2018.
