
Tales from the factory – curl vs wget


 

From time to time I will share some stories based on true events; maybe someone will learn something from them. Then again, maybe not. To protect the innocent, some names and events might be edited. Here comes the first one.

 

Someone raised a ticket that their application cannot access a certain url, let’s say “http://My.url.tld”. You dutifully log in to the system in question and try to access the url. Since the app uses the “libcurl” library, you naturally test with the corresponding command-line utility. You confirm that it does not work:

[user@someserver ~]$ curl http://My.url.tld
Error message.
[user@someserver ~]$

At the same time a colleague also sees the ticket, but for some reason he does his testing by way of “wget”. It works for him:

[user@someserver ~]$ wget http://My.url.tld
Correct result.
[user@someserver ~]$

You go back and forth with “it’s working”, “no, it’s not” messages until both of you realize that you are testing differently. So it works with “wget” but not with “curl”. Baffling. What could be wrong?

After running both utils in debug mode you spot a minute difference:

[user@someserver ~]$ curl -v http://My.url.tld
* About to connect() to My.url.tld port 80 (#0)
* Trying 1.1.1.1... connected
* Connected to My.url.tld (1.1.1.1) port 80 (#0)
> GET / HTTP/1.1
> User-Agent: curl
> Host: My.url.tld:80
> Accept: */*
>
<
< Error message
<
<
* Connection #0 to host My.url.tld left intact
* Closing connection #0
[user@someserver ~]$

[user@someserver ~]$ wget -d http://My.url.tld
DEBUG output created by Wget 1.12 on linux-gnu.
Resolving My.url.tld... 1.1.1.1
Caching My.url.tld => 1.1.1.1
Connecting to My.url.tld|1.1.1.1|:80... connected.
Created socket 3.
Releasing 0x000000000074fb60 (new refcount 1).

---request begin---
GET / HTTP/1.0
User-Agent: Wget (linux-gnu)
Accept: */*
Host: my.url.tld:80
Connection: Keep-Alive

---request end---
HTTP request sent, awaiting response...
---response begin---
HTTP/1.1 200 OK

Correct answer

---response end---
200 OK
Registered socket 3 for persistent reuse.
Length: 242 [text/xml]
Saving to: “filename”

100%[=========================================================================================================================================================================>] 242 --.-K/s in 0s

“filename” saved [242/242]
[user@someserver ~]$

Have you spotted it?

.
.
.
.
.
(suspense drumroll)
.
.
.
.
.
.
.

It turns out that wget does the equivalent of a tolower() on the hostname, so in the actual http request it sends “Host: my.url.tld”, while curl just takes what I specified on the command line, namely “Host: My.url.tld”. Taking the test further, calling curl with the all-lowercase url produces the expected result (i.e. it works).
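
If you want to confirm this from the shell (the url and the outputs below are the same placeholders used above), you can either lowercase the url yourself or keep the mixed-case url and override the Host header with curl’s -H option; if the last two invocations work, the casing of the Host header is indeed the culprit:

[user@someserver ~]$ curl http://My.url.tld
Error message.
[user@someserver ~]$ curl http://my.url.tld
Correct result.
[user@someserver ~]$ curl -H "Host: my.url.tld" http://My.url.tld
Correct result.
[user@someserver ~]$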

I know what you are thinking: it should not matter how you capitalize a hostname. True. Except that in this story there is a load balancer in the way, one that tries (and mostly succeeds) to do smart stuff. Well, it turns out that there was a host-based string match in that load balancer that did not quite match the mixed-case variant.
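
The story does not say which load balancer was involved, so purely as an illustration, here is a hypothetical HAProxy-style snippet (hostname and backend names made up) that would behave exactly like this: without the -i (ignore case) flag the string comparison on the Host header is case-sensitive, so “My.url.tld” falls through to the default backend.

# hypothetical HAProxy-style configuration, for illustration only
frontend http-in
    bind *:80
    # case-sensitive match: "my.url.tld" matches, "My.url.tld" does not;
    # "hdr(host) -i my.url.tld" would make the comparison case-insensitive
    acl is_myurl hdr(host) my.url.tld
    use_backend app_servers if is_myurl
    default_backend error_page

backend app_servers
    server app1 10.0.0.10:80

backend error_page
    server err1 10.0.0.20:80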

But a question remains. What is the correct behavior? The “curl” one or the “wget” one? I lean towards the “curl” approach, but maybe I am biased. What do you think?

Winds of change


 

Three months have passed since I made a (rather abrupt) shift in the focus of my career, specifically from net to sys. I have learned a lot since then and encountered a very different set of challenges from those of the last 15 years. Fun.

 

Unfortunately that also meant that I did not have much time to tend to this page. I finally managed to put things in order in my mind and chill a bit. So from now on I will talk less about networking stuff and more about systems stuff.

To mark this, I decided to reflect the change in the name and subtitle of this site as well.

So, goodbye packets, say hello to processes.

Later edit:
After two and a half years I’m back to net. ‘Nuff said.

Magic

For some reason some people (usually the clueless, but not always) assume that if you just buy and install some $DEVICE, the problem at hand will go away without any further issue.

How? Magic!

That’s why the big vendors make the big money: people think that just throwing money at the problem will magically make it go away. There is not always someone around to point out that this is not the case, and even when there is, his or her opinion gets ignored.

Let’s be clear on one thing: the only magic present in $DEVICE is the magic smoke. When the magic smoke goes out, the $DEVICE stops working.

 

SDN

Software-Defined Network. From Wikipedia: “Software-defined networking (SDN) is an approach to computer networking that allows network administrators to manage network services through abstraction of lower level functionality. This is done by decoupling the system that makes decisions about where traffic is sent (the control plane) from the underlying systems that forward traffic to the selected destination (the data plane). The inventors and vendors of these systems claim that this simplifies networking.”, “Software-Defined Networking (SDN) is an emerging architecture purporting to be dynamic, manageable, cost-effective, and adaptable, seeking to be suitable for the high-bandwidth, dynamic nature of today’s applications. SDN architectures decouple network control and forwarding functions, enabling network control to become directly programmable and the underlying infrastructure to be abstracted from applications and network services.”

Wow, I almost completed my bullshit-bingo card.

Will the network engineer become obsolete? Decoupling the data plane from the control plane sounds like an interesting idea, but right now SDN only exists in the marketing plane.

Don’t get me wrong, I like the idea, I just don’t think it would necessarily be a “Good Idea”. SOHO? Sure, why not. Datacenter? Could work, since you also (mostly) control the traffic source. But ISP/Carrier? I would not like to see the day the controller decides to reroute some traffic and in the process isolates itself from the network or, even better, isolates some remote node.

But I guess you can always have the excuse “the computer says so”.

 

 

Packetdam

A first here: a rant-free post.

Just a bit of free advertising for a nice product. I’ve been using it for a while now and the only gripe I have with it is that the developer does not advertise it more.

In order to respond to DDoS attacks in a timely manner you need to detect them as quickly as possible. If you ever need a really, and I mean really, fast DDoS detection engine, try Packetdam (www.packetdam.com). No matter how hard other vendors who wanted to sell us something tried to compete with it, they always fell short on detection speed.

Don’t take my word for it, go grab an evaluation build and test it. If you have questions, the supplier is nice and quick to answer.

 

(In)sane defaults

Because SUP720-3BXL, and because anyone running a full BGP feed on it should have seen this in the last few days:
%MLSCEF-SP-4-FIB_EXCEPTION_THRESHOLD: Hardware CEF entry usage is at 95% capacity for IPv4 unicast protocol

Because 512k routes should be enough for everyone (although the specs advertise 1024k).

Here you will find a workaround:

http://www.cisco.com/c/en/us/support/docs/switches/catalyst-6500-series-switches/117712-problemsolution-cat6500-00.html

Ofc, reload is needed 🙁
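
For the record, and assuming I remember the platform correctly, the gist of the workaround is to re-carve the FIB TCAM so that a larger share goes to IPv4; the value is given in thousands of routes, whatever IPv4 gains is taken away from the IPv6/MPLS regions, and the new allocation only takes effect after the dreaded reload. Treat the lines below as a sketch and follow the linked document for the real procedure:

router# configure terminal
router(config)# mls cef maximum-routes ip 1000
router(config)# end
router# show mls cef maximum-routes
router# reload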

 

Video killed

“Video killed the radio star” is the title of a song that was popular at the beginning of the ’80s. To put it into perspective, MTV started broadcasting with this song.

Well, in my opinion video has claimed yet another victim: the Internet as we knew it. Goodbye net neutrality, it was fun while it lasted.

RIP eigrp

All routers bow to your new overlord, ISIS.

Well, since a certain vendor has not tried hard enough to ensure that its newfangled router software plays nice with its older brothers (“stuck in active!”), I decided it was time to migrate all of the legacy network that was still using eigrp to isis.

Why isis and not ospf, you might ask? Among other reasons, because I decided that running two different instances of an IGP (OSPFv2 and OSPFv3) to cover both IPv4 and IPv6 is not worth the hassle.

Oh, and to keep up the ranting: of course the new and the old software have different defaults for isis when it comes to IPv6 (I must admit the new one is saner, though). Fun.
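
For what it’s worth, the attraction in configuration terms is that a single isis instance carries both address families. Below is a minimal IOS-style sketch: the instance tag, NET, interface and addresses are all made up, and multi-topology is enabled explicitly precisely because, as noted above, the defaults differ between software versions.

! hypothetical instance tag and NET; level-2-only with wide metrics assumed
router isis LEGACY
 net 49.0001.0000.0000.0001.00
 is-type level-2-only
 metric-style wide
 !
 ! carry IPv6 in the same instance, as a separate topology
 address-family ipv6
  multi-topology
 exit-address-family
!
interface GigabitEthernet0/1
 ip address 192.0.2.1 255.255.255.252
 ipv6 address 2001:db8::1/64
 ip router isis LEGACY
 ipv6 router isis LEGACY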