I have been coding a new protX interface for a client this week for processing credit card payments. Bizarrely, out of the blue today my code stopped working. Even though nothing had changed on my end, any attempt to send a transaction through to the protX server resulted in a response of "connection failure". I switched my code to post to the live server instead of the test server, and suddenly I got a response back, so it definitely wasn't a connection problem on my end, and my code was still working. So I gave protX support a call (actually I opened a ticket first, but they never replied, and still haven't 12 hours later), and their support guy assured me that nothing had changed on their end and nothing was broken, so it must be a problem with my code. Which is actually quite ironic, as it is usually me saying this to customers who insist "My code is fine, it must be your server". As this conversation was clearly going no further, it was time to put on my Sherlock Holmes hat and start investigating.
One thing the support guy had pointed out was that my transactions were coming through, and sure enough, when I logged into VSP Admin there were my transactions. So why was I getting a "connection failure" in my CF page?
Next I tried a plain old HTML form which posted directly to the protX gateway. This worked and I got back the expected response, so it seemed the problem only affected ColdFusion pages and my CFHTTP call. Now that is weird, I thought.
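For anyone wanting to run the same sanity check, a throwaway page along these lines is all it takes. Note the gateway URL and field names below are placeholders rather than the real protX protocol values, which I'm not reproducing here:

<!-- Bare-bones form posting straight to the gateway, bypassing CFHTTP entirely -->
<form action="https://test.gateway.example.com/transaction" method="post">
    <input type="hidden" name="SomeProtocolField" value="test value">
    <input type="submit" value="Send test transaction">
</form>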
Next I checked the HTTP response headers from the live and test servers. Since I had already discovered that posts to the live server were still working, there had to be some difference between the two.
Live showed:
HTTP/1.1 200 OK
Date: Tue, 09 Dec 2008 23:40:33 GMT
Content-Language: en-GB
Content-Length: 312
Server: Microsoft-IIS/6.0
Test showed:
HTTP/1.1 200 OK
Vary: Accept-Encoding
Date: Tue, 09 Dec 2008 23:41:49 GMT
Content-Language: en-GB
Server: Microsoft-IIS/6.0
X-Powered-By: ASP.NET
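Incidentally, if you want to grab those headers from within ColdFusion rather than a browser tool, a quick throwaway page like this does the job (the URL is just a stand-in for whichever server you are checking):

<!--- Fire a simple request and dump the response headers so live and test can be compared side by side --->
<cfhttp url="https://test.gateway.example.com/" method="get" result="headerCheck">
<cfdump var="#headerCheck.responseHeader#">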
The main difference between the two sets of headers is the "Vary: Accept-Encoding" part. A bit of googling on this header told me it relates to HTTP compression. So it seemed that protX had turned on HTTP compression on their test server without telling anyone. I added the following to my CFHTTP call:
<cfhttpparam type="Header" name="Accept-Encoding" value="deflate;q=0">
And lo and behold, things started working again. So my assumption was correct: while browsers know how to decompress gzip or deflate encoded content by default, ColdFusion, it seems, does not.
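For context, the fix sits inside the full CFHTTP call something like this. The URL and form fields here are made-up placeholders rather than the actual protX protocol fields:

<cfhttp url="https://test.gateway.example.com/transaction" method="post" result="gatewayResponse">
    <!--- Ask the server not to send deflate-compressed content, as CFHTTP won't decompress it --->
    <cfhttpparam type="Header" name="Accept-Encoding" value="deflate;q=0">
    <!--- Placeholder transaction fields - swap in the real protX fields here --->
    <cfhttpparam type="formField" name="Amount" value="9.99">
    <cfhttpparam type="formField" name="Description" value="Test transaction">
</cfhttp>
<cfoutput>#gatewayResponse.fileContent#</cfoutput>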
If you send the Accept-Encoding HTTP header, then httpZip (or any other compression solution) should respond with the first compression scheme specified in that header's value. In other words, if the CFHTTP call is sending:
Accept-Encoding: gzip, deflate
Then the server running httpZip should respond with gzip-encoded data (accompanied by the HTTP header "Content-Encoding: gzip"). If, on the other hand, the CFHTTP call is sending:
Accept-Encoding: deflate
Or
Accept-Encoding: deflate, gzip
Then the response from the httpZip-enabled server should be deflate-encoded data (which would be signaled by the HTTP header "Content-Encoding: deflate").
Now I say "should be", but in this case it is not: I still get the same response back from the protX server regardless, even though it is clearly taking notice of the new header in my request.
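If you want to see what the server actually did with your Accept-Encoding preference, check the Content-Encoding header on the response. Something along these lines, again with a placeholder URL:

<cfhttp url="https://test.gateway.example.com/transaction" method="post" result="gatewayResponse">
    <cfhttpparam type="Header" name="Accept-Encoding" value="gzip, deflate">
</cfhttp>
<!--- If the server compressed the body this header will say gzip or deflate; if it is absent, the body came back uncompressed --->
<cfif structKeyExists(gatewayResponse.responseHeader, "Content-Encoding")>
    <cfoutput>Response was encoded as: #gatewayResponse.responseHeader["Content-Encoding"]#</cfoutput>
<cfelse>
    <cfoutput>Response was not compressed</cfoutput>
</cfif>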
I certainly hope protX are not planning to enable HTTP compression on their live server without warning, otherwise they may have a lot of very pissed off customers with broken shopping carts.
So you may want to set the above header in your CFHTTP calls by default just in case. It won't have any effect if there is no active HTTP compression, but it may save your ass if compression gets enabled in the future.