Category Archives: Web Programming

Notes that discuss my HTML5, jQuery, and PHP based web programming

RSS Error: WP HTTP Error: name lookup timed out — Hosed up DNS settings

Background on the WordPress dashboard error message “RSS Error: WP HTTP Error: name lookup timed out”: out of nowhere, this error appeared on the dashboard of my WordPress blog, and my pages were displaying very slowly, especially the widgets in the right-hand column, even after updating my WooCommerce Recurring Payments.

It turns out that my server (a Fedora Linux system running in my basement; I am the system admin) needed a change to its DNS settings. I knew the change was coming, but I wasn’t sure when, and this error was the first symptom telling me I should have planned ahead a little better.

I was changing ISPs and my home network DNS settings needed to change from 192.168.100.155 to 192.168.100.164.

  • I edited my /etc/resolv.conf file,
  • I edited the files under /etc/sysconfig, and then
  • I restarted my web server.

And everything was fine. But it wasn’t as easy as it sounds. Here are some notes for my reference and the potential benefit of others:

/etc/resolv.conf

For a quick test, go to a command line and try to ping yahoo.com. This didn’t work for me: my old ISP’s DNS server had gone offline. This was expected, and I needed to update my settings.

The file /etc/resolv.conf is the key file consulted for virtually every DNS lookup; anything on your Linux system that uses gethostbyname() checks it. In theory, all I had to do was edit this file to fix my “RSS Error: WP HTTP Error: name lookup timed out” error.

/etc/resolv.conf

# Generated by NetworkManager
nameserver 192.168.100.164        # was 192.168.100.155

The quick test is to ping yahoo.com. Sure enough it works fine.
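Besides ping, getent goes through the same NSS lookup path that gethostbyname() uses, so it is a convenient way to confirm that a resolv.conf change has actually taken effect. A minimal sketch (the host names are just examples):

```shell
# getent resolves names through the same NSS path as gethostbyname(),
# so it reflects /etc/resolv.conf changes immediately.
getent hosts localhost      # resolves via /etc/hosts; basic sanity check

# For an external name (needs the new nameserver to be reachable):
#   getent hosts yahoo.com
```

If the external lookup hangs or fails while the localhost lookup works, the problem is almost certainly the nameserver line in resolv.conf.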

/etc/sysconfig

We know from experience that the /etc/resolv.conf file is generated by Fedora’s NetworkManager service. So go into /etc/sysconfig, find all matches to the old IP address, and switch them to the new one. For example, I ran the following bash command line to find the files I needed to edit:

[root@kozik2 sysconfig]# grep -r 192.168.100.155 * | grep DNS
networking/profiles/default/ifcfg-eth0:DNS1=192.168.100.155
networking/devices/ifcfg-eth0:DNS1=192.168.100.155
network-scripts/ifcfg-eth0:DNS1=192.168.100.155

I went into each of these files and changed the 155 to 164. This way, when NetworkManager next runs, my /etc/resolv.conf won’t revert to its old settings.
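The substitution itself can be scripted with sed. Here’s a hedged sketch, demonstrated on a scratch copy so nothing live is touched (in real life you would run it against the files the grep above reported, after backing them up):

```shell
# Demo setup: a scratch copy of one ifcfg file.
mkdir -p /tmp/sysconfig-demo
echo 'DNS1=192.168.100.155' > /tmp/sysconfig-demo/ifcfg-eth0

OLD=192.168.100.155
NEW=192.168.100.164

# -i.bak edits the file in place but keeps a .bak copy of the original.
sed -i.bak "s/${OLD}/${NEW}/g" /tmp/sysconfig-demo/ifcfg-eth0

grep DNS1 /tmp/sysconfig-demo/ifcfg-eth0    # DNS1=192.168.100.164
```

The .bak backups make it easy to revert if NetworkManager misbehaves after the change.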

Not enough, I needed to troubleshoot

So I thought I knew what I was doing. I had fixed the DNS IP address, but I was still getting the “RSS Error: WP HTTP Error: name lookup timed out” error on my blog’s dashboard, and the posting pages were very, very slow to display. Something still wasn’t right with WordPress.

Since ping now worked, I decided to run tcpdump and see what’s going on:

tcpdump -n port 53 | grep 192.168.100.155

It just sat there. Good, I thought. Next, I tried to retrieve my dashboard page, and the tcpdump started rolling with unsuccessful requests to the DNS server at .155 trying to resolve dashboard.wordpress.com. Ah, I had forgotten one last step!

Don’t forget to restart the webserver!!

I forgot that on initialization, PHP reads resolv.conf once and then keeps using that same nameserver IP address! A simple run of the following command fixed everything:

service httpd restart

I then refreshed my dashboard page and everything came up fast and normal.

jQuery ajax always() parameter order inconsistent

My single-page web application makes several REST-style calls to my server. jQuery’s $.ajax is the tool of choice for implementing this on the browser side. When converting from jQuery version 1.4 to 1.7+, I started to follow their new coding pattern. Here’s the old way, the new pattern, and a little gotcha (the jQuery ajax always() parameters are inconsistent depending on the success condition) that I write up here for my reference and potentially the benefit of others.

Old mode of operation using jQuery 1.4 $.ajax syntax:

$.ajax({"url":RestURI,
	"type":"GET",
	"dataType":"text",
	success: function(data) {
		// Do success stuff
		// stop logging timer, do logging
		// stop spinner
	}, /* end success: */
	
	error: function (req, stat, err) {    
		// Do error stuff        
		// stop logging timer, do logging
		// stop spinner
	} /* end error: */
});

The new pattern in jQuery 1.7 uses the chainable done(), fail(), and always() promise methods derived from the jQuery Deferred object. See the code snippet below for the general case of how I use it. It’s pretty straightforward.

// Start spinner
// start logging timer	 
$.ajax({"url":RestURI,
	"type":"GET",
	"dataType":"text"})
	.done( function(data, textStatus, jqXHR) {
		// Do success stuff
	}) /* end done: */
	
	.fail( function (jqXHR, textStatus, err) {
		// Do error stuff    
	}) /* end fail */
	
	.always( function() {
		// stop logging timer, do logging
		// stop spinner
	}); /* End always */

The new $.ajax pattern lets me have multiple always, done and fail callbacks, so I thought it would be good to add an .always() at the top, something that would run first, before a done or fail. Well, I discovered that the always() parameters are inconsistent; that is, the always callback receives two different sets of parameters depending on how the $.ajax call finished. If the ajax call failed, the always callback parameters are (jqXHR, textStatus, errorThrown); if it succeeded, they are flipped: (data, textStatus, jqXHR).

So if you want to use the always() function, I recommend you either set up a callback with no parameters or test the textStatus parameter (the second argument, which is “success” on success). This quirk got me a couple of times. I found references to it on the jQuery bug list; this behavior is by design. The code snippet below captures the issue. It is not a show stopper; the new promise paradigm is pretty cool, and I have some cases where I chain together multiple ajax calls, where this will come in really handy.

// Start spinner
// start logging timer	 
$.ajax({"url":RestURI,
	"type":"GET",
	"dataType":"text"})
	.always( function(arg1, textStatus, arg3 ) {
		// on fail: always( function(jqXHR, textStatus, errorThrown) { and textStatus != "success"
		// on done: always( function(data, textStatus, jqXHR) { and textStatus == "success"
		console.log("First always function. textStatus="+textStatus+"\n");
	}) /* end always (top) */
	
	.done( function(data, textStatus, jqXHR) {
		// Do success stuff
	}) /* end done: */
	
	.fail( function (jqXHR, textStatus, err) {
		// Do error stuff    
	}) /* end fail */
	
	.always( function() {
		// stop logging timer, do logging
		// stop spinner
	}); /* End always */
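One way to cope with the flipped arguments is a tiny normalizer keyed off textStatus, the only parameter whose position is stable. This is just a sketch of the idea; the function name is my own, and the jqXHR objects are simulated with plain objects so it runs without jQuery or a network:

```javascript
// Normalize the inconsistent .always() arguments into one shape.
// On success jQuery passes (data, textStatus, jqXHR); on failure it
// passes (jqXHR, textStatus, errorThrown). textStatus is in the same
// position either way, so use it to tell the two cases apart.
function normalizeAlwaysArgs(arg1, textStatus, arg3) {
  if (textStatus === "success") {
    return { data: arg1, textStatus: textStatus, jqXHR: arg3, err: null };
  }
  return { data: null, textStatus: textStatus, jqXHR: arg1, err: arg3 };
}

// Simulated calls (jqXHR faked as a plain object):
var ok  = normalizeAlwaysArgs("payload", "success", { status: 200 });
var bad = normalizeAlwaysArgs({ status: 500 }, "error", "Internal Server Error");
console.log(ok.jqXHR.status, bad.jqXHR.status);   // 200 500
```

Inside the real chain this would look like `.always(function (a, t, b) { var r = normalizeAlwaysArgs(a, t, b); /* use r.jqXHR, r.err */ })`.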

Vint Cerf at Purdue

Vint Cerf Presenting at Purdue

Background on my Vint Cerf at Purdue posting: I attended the celebration of the 50th anniversary of the founding of the Computer Science department at Purdue University. It was an excellent chance to see former professors and alumni and to attend a day of presentations by distinguished lecturers, including Vint Cerf, one of the founders of the Internet.

Vint Cerf at Purdue on “Political, Economic, Social and Technical (PEST) Impacts of the Internet”

Purdue University

He told the story of how the Internet was founded. He very modestly described his role and shared a bunch of factoids on the growth of the Internet and some new initiatives.

Some topics that caught my attention: IPv6, internationalization (UTF-8), new gTLDs, DNSSEC, sensor networks, smart grid, mobile devices, poisoned caches, routing system hijacking, a laptop integrated into a surfboard (surfing while surfing), digital vellum, OpenFlow and content-based routing, MOOCs, etc.

Interplanetary Internet

[youtube  http://www.youtube.com/watch?v=XTmYm3gMYOQ&w=300&h=180]

He was particularly excited telling the story behind the interplanetary internet. He described the problem of transmitting a packet to Mars, where the round-trip delay can vary between 7 and 40 minutes, or even worse given that the planet rotates out of coverage once per Mars day. He said that spacecraft in the future will be designed to perform deep-space network relaying for the benefit of follow-on missions.

CDs

He was particularly struck by a meeting with a librarian who showed him a perfectly preserved manuscript over 1000 years old. He was challenged to show them any computer-based storage technology that can hold its meaning for 1000 years (let alone 10)! He said that the Internet needs to create a Digital Vellum.

Do Anything Differently?

A member of the audience asked what was wrong with the original Internet design and what he would do differently. The question was received with some laughter because it implied that the Internet’s designers had done a less than perfect job, but in good spirit he answered: IP address space and security.

For IP addresses, they considered the number of countries, the number of sites, and the number of servers, based on creating something that mirrored the ARPANET; 32 bits was more than enough. They debated variable versus fixed-length addresses, and they decided that since the Internet was a research project, they could revisit the decision if they ever decided to productize the design.

For security, they had a whole nicely designed security solution worked out with the military. It never got put into the Internet because no one had sufficient security clearance to know what the security architecture was going to be. Further, the original paper by Martin Hellman et al. (the Diffie-Hellman key exchange) didn’t come out until the late 70s. If it had been around, they would have used it.

The Machine Stops

Another question from a student in the audience (paraphrasing): if the internet is connected to your bathroom scale, your refrigerator, and everything in your refrigerator is RFID-tagged, then what’s to stop the government from locking your refrigerator if you weigh too much? This question was asked very sincerely. Vint gave a good answer, saying this question has been asked since the beginning of the Industrial Revolution: are machines taking over our lives? He referred the student to a 1909 story called “The Machine Stops,” by E. M. Forster.

Video of Vint Cerf at Purdue

[youtube http://www.youtube.com/watch?v=_-8lXXzQ1e8 ]

Purdue posted a nice introductory video of Vint’s presentation; it’s only the first 3 minutes. At some point, I’d like to get Vint’s ppt slides and maybe a pointer to his full presentation. I’ll post them here if I can.

I was honored to sit at his table for dinner. We shared a brief conversation about Google’s Serge Lachapelle and his WebRTC project. Vint was very excited about it.

Lawson Computer Science Building

Saratoga Weather Scripts Updated on NapervilleWeather.net

Saratoga Weather scripts built the above webpage for my personal weather station.

Since February 2012, I have been running a version of my weather web site, NapervilleWeather.net, using the Saratoga Weather Scripts from Ken True’s Saratoga-Weather.org website. Ken supplies many of us home weather station enthusiasts with the PHP scripts that make our local station data easy to read. We thank Ken for his work.

Since my original setup, many updates, tweaks and fixes to the Saratoga Weather Scripts have been released through Ken’s site, and recently I applied updates to 56 files, mostly PHP scripts and weather image files. The update went smoothly, and I wanted to use this blog entry to capture the work.

Check for Updates to the Saratoga Weather Scripts

I started at the Check for Updates page:

Check For Updates

From this dialog box I downloaded a 340K zip file containing 56 files. I unzipped the files and printed out the two README files; then, using them as a checklist, I copied the files into my production website one at a time.

Copy New File Over Old Version

For each file, the README recommended what to do.  For example,

For quake-json.js, this was a new file for me, and I just copied it to the root folder of my website.

For wu-radar-inc.php, I already had this file in place, so I did a diff between the existing file and the updated file and confirmed that any customizations I had made were carried forward.

After each file I copied onto my live site, I verified that nothing broke.
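The per-file routine above can be sketched in a few commands. This is a hedged demo on scratch copies (the file names and paths are examples), so nothing live is touched:

```shell
# Demo: one updated script going into a "live" site directory.
mkdir -p /tmp/site /tmp/update
echo 'old version' > /tmp/site/wu-radar-inc.php
echo 'new version' > /tmp/update/wu-radar-inc.php

# 1. Diff first, so any local customizations are visible before overwriting.
diff /tmp/site/wu-radar-inc.php /tmp/update/wu-radar-inc.php || true

# 2. Keep a backup, then copy the update into place.
cp /tmp/site/wu-radar-inc.php /tmp/site/wu-radar-inc.php.bak
cp /tmp/update/wu-radar-inc.php /tmp/site/wu-radar-inc.php
```

The backup makes it easy to roll a single file back if something on the live site breaks after the copy.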

I run several Linux machines. The NapervilleWeather.net website runs on a virtual machine running Fedora 16 on the Rackspace Cloud. The Linux upgrade process is pretty automated, using tools like rpm, yum, or apt-get. For the Saratoga Weather Scripts, you need to step through the updates one file at a time. That’s OK; for 50-some files it took a while, and maybe next time I won’t wait a year to apply updates.

Noteworthy

It was nice to have my system brought up to date, and new for me is a really nice National Weather Service alerts web page. It’s an alternative to the previous page, which followed the NWS’s Atom feed. Thanks to Curly at http://www.weather.ricksturf.com/

A version-controlled image of my website is stored in the GitHub repository jkozik/saratoga; this can give you a peek at how I store the files and format my contents.