Massive Refresh Underway

puravida

This is a quick notice that we are in the process of refreshing more than 1,280,000 requests that were marked as "NS_ERROR_UNKNOWN_HOST". It will take a while to catch those up, since we also have to capture more than 5,000 new requests per hour.

The reason for this large batch of refreshes is that we discovered some DNS issues with the Level3 DNS servers we used previously. A good many sites did not resolve and came back as failed captures as a result. Since there is no way for us to separate out the ones that should have worked, we have to retry all of them.

The new DNS servers we are using seem to be much more complete and reliable. We hope to see a good many previously bad requests come back as good now.
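
To give a sense of what that looks like, here is a minimal spot-check that resolves a list of previously failed hostnames against the new resolvers. This is only an illustration, not our production tooling; it assumes the dnspython package (2.x) and a hypothetical failed_hosts.txt list of hostnames:

import dns.exception
import dns.resolver

# Point a resolver directly at the new nameservers (Google Public DNS here)
resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["8.8.8.8", "8.8.4.4"]
resolver.lifetime = 5.0  # seconds before giving up on a lookup

with open("failed_hosts.txt") as fh:  # hypothetical list of hostnames
    for host in (line.strip() for line in fh if line.strip()):
        try:
            answer = resolver.resolve(host, "A")
            print(host, "now resolves to", answer[0])
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer,
                dns.resolver.NoNameservers, dns.exception.Timeout):
            print(host, "still does not resolve")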

After these complete, we will also put more than 250,000 requests marked "RENDER_ENGINE_HUNG" back in the Q, so that we can be sure those are not a problem either.

New captures may be sluggish or delayed during this massive refresh. If the delay becomes excessive, we may deploy more capture generators or adjust the prioritization a bit.

2020media

The Google Public DNS resolvers have been a handy backup for me.

See http://code.google.com/speed/public-dns/docs/using.html
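
For reference, that doc boils down to pointing the system resolver at Google's two addresses, e.g. on a typical Linux box (exact steps vary by distro):

# /etc/resolv.conf
nameserver 8.8.8.8
nameserver 8.8.4.4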

puravida

Glad to hear that. Those are the ones we have switched over to using now. :)

puravida

Quick Update: The refresh is progressing quickly; we are knocking out a total of about 25,000-30,000 requests per hour, which suggests the current refresh will complete in 2-3 days. New requests do not appear to be too negatively affected under the current Q prioritization, so we will hold off on adding new generators for now.
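
For anyone checking the math on that estimate: the backlog started at roughly 1,280,000 refreshes, and if the 25,000-30,000 per hour figure includes the roughly 5,000 new requests we capture each hour, that leaves about 20,000-25,000 refreshes per hour. 1,280,000 divided by 20,000-25,000 per hour works out to roughly 51-64 hours, i.e. right around 2-3 days.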

puravida

Quick Update: We had a major influx of new requests over the past 24 hours, but the generators still managed to capture all of the new requests plus another 500,000 of the refreshes during that time. So things are moving along smoothly.

I have gone ahead and put the last 250,000 "RENDER_ENGINE_HUNG" requests into the Q and will monitor that progress as well. These types of errors are the most troublesome and time-consuming, but I think the new Q prioritization algorithm will handle them well. If not, we will take action to mitigate any slowdown.

puravida

Quick Update: The refresh is still running. Those "RENDER_ENGINE_HUNG" errors are the worst kind, because most of those sites really are broken or are so poorly configured that they delay everything. However, a good many of the 250,000 have been reprocessed. Once those are done, the remaining 900,000 refreshes should go quickly.
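
To illustrate why a hung render is so costly: each one ties up a generator until something gives up on it, so the usual defence is a hard per-capture timeout. The sketch below only shows the idea and is not our actual generator code; capture_url.py and the 60-second limit are made-up placeholders:

import subprocess

def capture_with_timeout(url, out_file, limit_seconds=60):
    # Run a hypothetical single-URL capture script, but never let it
    # hang a generator for more than limit_seconds.
    try:
        subprocess.run(
            ["python", "capture_url.py", url, out_file],
            timeout=limit_seconds,
            check=True,
        )
        return True
    except subprocess.TimeoutExpired:
        print("render hung, giving up on", url)
        return False
    except subprocess.CalledProcessError:
        print("render failed for", url)
        return False

capture_with_timeout("http://example.com", "example.png")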

We've noticed that new requests have slowed a bit, but they are still within reason, considering. New request response times are currently anywhere from 30 seconds up to a few hours. That is acceptable for now, and the slowdown should pass in the next 24 hours (once the troublesome requests are refreshed).

puravida

Quick Update: I've been watching the Queue very closely over the past few days, especially the last 24 hours. I noticed that the really troublesome refresh requests had been completed, but the Queue still was not dropping as fast as anticipated.

Therefore, I spent the last 6 hours investigating and optimizing Q performance. I found a few ways to streamline the logic and refine the queries, eliminating some of them entirely, and managed to cut the script run time by 75%. I then noticed that, with the Queue this high and the specific configuration of dedicated generators, the Queue for new requests was building. After some tweaking, I figured out that the refreshes were getting done first, which is unacceptable. I have deployed a new version of the Q optimization and have been watching it for the past few hours. It looks as though it is running much more smoothly now, and the new requests are dropping quickly.
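
For anyone interested in what that kind of fix looks like, the core of it is just two-tier prioritization: new capture requests always come off the queue ahead of refreshes. The sketch below is only an illustration of the idea, not our production Q code (the names are made up):

import heapq
import itertools

NEW, REFRESH = 0, 1         # lower value = served first
_order = itertools.count()  # preserves FIFO order within a tier
queue = []

def enqueue(url, kind):
    heapq.heappush(queue, (kind, next(_order), url))

def dequeue():
    kind, _, url = heapq.heappop(queue)
    return url

enqueue("http://refresh-me.example/", REFRESH)
enqueue("http://brand-new-request.example/", NEW)
print(dequeue())  # the new request is served before the refresh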

I will continue to watch and be sure that no new issues arise from this latest round of changes.

puravida

Quick Update: The refresh has completed as of a few hours ago. All systems are back to normal, handling the typical influx of requests with about 95% sitting at idle.
