Creating a Disaster Recovery Plan for your Magento Site

Backups can give a false sense of security. They are a prominent and essential component of a Disaster Recovery Plan. But while taking a simple backup is easy (a hosting provider will do it for you, for example), creating a disaster recovery plan is not.

A disaster recovery plan drives what is backed up and where. It also guides the restoration process.

Since it is difficult to know in advance what disasters can happen, it is much easier to break the problem down into scenarios. Examples could include:

  • Data disk failure
  • Server failure – a server critical to the functioning of the website has failed. This could be our only server, the db server, the app server, etc.
  • Data corruption – either of the database or the application
  • Operating system issues
  • Compromised system – we were hacked, but all data is safe

Each scenario determines what data is needed to recover. Often-forgotten items to back up are the system configuration files modified for our site, such as the nginx or apache configuration, email configuration, external search engine configuration, etc.

  • Not all data has to be backed up – some files can be versioned, allowing restoration from a previous known working version.
  • Location of the backup is another crucial factor – we always back up customer data off-site, at a cloud location other than the primary hosting location, for example a different AWS region or a completely different cloud provider (see the sketch after this list). Versioned data is also stored in a known location.
  • Access to the backed-up data should be available – so pem keys, usernames, passwords, etc.
  • It is also important to store information about the system services installed and where they came from (the rpms, versions and yum repos). As with versioned items, this should be updated when it changes.
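
As a minimal sketch of the off-site idea – assuming an AWS-hosted site backing up to a bucket in a different region, and with a placeholder database name, paths and bucket – a nightly job could look like:

# Dump the database and the modified system configuration files,
# then ship them to an S3 bucket in a region other than the servers'.
mysqldump --single-transaction magento_db | gzip > /backups/db-$(date +%F).sql.gz
tar czf /backups/etc-$(date +%F).tar.gz /etc/nginx /etc/php-fpm.d
aws s3 sync /backups/ s3://example-dr-bucket/ --region eu-west-1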

It should be clear what data would be lost. Is your database replicated or backed up periodically? How about media files – are they backed up immediately or on a schedule?

Many times, a service may be down only temporarily, and a decision may need to be made between waiting for the repair and initiating recovery.

The trade-off may be between the hours of data lost and the hours of service lost.

Who has the switch to initiate the process? The process may be initiated, but the final switch may require a second confirmation.

All such processes should be documented.

Just because you have a backup does not mean restoration will be instantaneous. Human factors may also need to be considered. An estimate per scenario will help the decision maker and set expectations.

Recovery services charge upon initiating the recovery process. The charges may apply even if the final go-ahead is not given.

To get you started we have a template for a disaster recovery plan. Click here to download.

Alternatively, sign up for our service and get a bespoke plan for your situation.

We can analyze your site for free

Schedule a call

Before the call we will analyze your site using public information. We will ask you to give us a 1-day apache or nginx log file of your site. Our half-hour consultation will try to identify what steps, if any, you should take to improve your site's performance.

Case Study: How luroConnect Insight detected network throttling by a cloud provider

Introduction

“The site is slow” is a very common complaint. A system administrator gets this complaint and is often not sure what to do. Typically, the system administrator will:

  • run the top command to see if the system parameters look good, typically CPU and load average
  • check the mysql slow log to see if any queries are slowing down

The typical solution might involve restarting various services

  • nginx
  • php-fpm
  • mysql

The problem seems to go away, only to sometimes reappear. If on a cloud service, a solution might be to add more cloud app servers.
But when CPU load is not high and load average is not high, what do you do?

We had one such instance for a luroConnect customer – a funded eCommerce site serving the B2C and B2B markets. They had recently moved to a new service provider with larger servers, including physical servers for the app and db, with some VMs to take additional load.

CPU, memory, load average and slow logs did not report any load at all – in fact, we found that the CPU and load average were way below our expectation given the hits. We then set up luroConnect Insight log file analytics to observe the nginx log file. The response vs upstream average plot was the one that alarmed us. The nginx log file parameters $request_time and $upstream_response_time are set up as part of the standard luroConnect Insight setup. This allows tracking of php response times independently of the end-to-end response time.
  • $request_time – request processing time in seconds with millisecond resolution; the time elapsed between the first bytes being read from the client and the log write after the last bytes were sent to the client.
  • $upstream_response_time – the time spent receiving the response from the upstream server, in seconds with millisecond resolution.
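
For reference, a minimal nginx log_format capturing both timings might look like the following – the format name and log path are illustrative, not the exact luroConnect configuration:

# In the http {} context: append both timings to each access log line.
log_format timed '$remote_addr [$time_local] "$request" $status '
                 'rt=$request_time urt=$upstream_response_time';
access_log /var/log/nginx/access.log timed;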

The difference between the two can be due to a slower internet connection at the browser end. However, at the high hit rate the site got, that was unlikely. The only plausible explanation was a network issue with the cloud service provider, possibly throttling of the internet connection.
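
As an illustration, assuming the rt=/urt= fields from the log format sketched above, a rough scan for requests where the client-facing time exceeds the upstream (php) time by more than a second would be:

# Print log lines where request_time - upstream_response_time > 1s.
awk '{ for (i=1; i<=NF; i++) { if ($i ~ /^rt=/) rt=substr($i,4); if ($i ~ /^urt=/) urt=substr($i,5) } if (rt - urt > 1) print }' access.log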

The resolution

We talked to the service provider with proof and explained the method used to make the observations – which was clearly different from anything they had seen earlier. We then set up a test by creating a flood of requests from another server on the same subnet and reproduced the same throttling result.
They investigated and found a misconfigured piece of network equipment.

Conclusion

luroConnect was not designed with this use case in mind. But it was designed to give insights into the web server that regular monitoring tools do not give.

Minify css and js for Magento as a build process

How to improve page load speed without server overhead so you can serve more pages.

Need for a build?

Since Magento is written in php, an interpreted language, a build step is not essential for deployment. Moreover, since many small store owners are not technical or do not have a full-time technical team, solutions that just work inline are preferred – for example, using plugins that minify css and js, or transfer files to a CDN as and when needed inline, or even Google's excellent pagespeed plugin.
Unfortunately, each one of these inline steps, though it improves page load speed, results in an ever-so-slight slowdown of the server each time. On a high traffic site, this results in inconsistent performance and user experience. We even zip the static content into .gz files so the web server (nginx in our case) does not have to spend a few milliseconds each time – assuming, of course, you do not have a CDN that can zip.

Grunt, the task builder

We have used Grunt (http://gruntjs.com/) as a task builder. Grunt is a popular javascript task runner written in nodejs. We use grunt for many release-oriented activities – packaging a release, installing a release, minifying css and js, etc. In this article – the first of a planned series – we will go through the process of installing grunt and offer a solution to minify js and css files as well as optimize images in the skin directory.

Installing grunt

  • Install nodejs and npm
    curl -sL https://raw.githubusercontent.com/nodesource/distributions/master/rpm/setup_4.x | sudo bash -
    sudo yum install nodejs npm
  • Install grunt
    sudo npm install -g grunt-cli
  • Download our Gruntfile.js and related code
    mkdir /scripts
    cd /scripts
    git clone https://github.com/luroconnect/gruntformagento.git
    cp -r gruntformagento/src/* .

Run grunt to minify css and js (and more)


cd /scripts
grunt optimize
Typical output:
Running "copy:skin" (copy) task
Created 229 directories, copied 1769 files

Running "copy:js" (copy) task
Created 197 directories, copied 893 files

Running "uglify:skin" (uglify) task
>> 30 files created.

Running "uglify:js" (uglify) task
>> 301 files created.

Running "cssmin:skin" (cssmin) task
>> 149 files created. 2.34 MB → 1.77 MB

Running "imagemin:skin" (imagemin) task
Minified 1412 images (saved 400.26 kB)

Running "compress:skinjs" (compress) task
>> Compressed 32 files.

Running "compress:skincss" (compress) task
>> Compressed 155 files.

Running "compress:js" (compress) task
>> Compressed 332 files.

Done, without errors.

What is done by optimize:

  • Create 2 directories, skin.min and js.min, initially with content identical to skin and js respectively
  • Run the minifier for css and js on the skin.min and js.min directories. .min.js files are not minified again.
  • Run the image optimizer on skin (png, jpeg)
  • Generate .gz gzipped files – for static delivery of gzip. See the note below on nginx configuration.

Update Magento Web URLs

Update the Magento unsecure and secure skin and js URLs to point to skin.min and js.min respectively, where the minified content is kept. A sketch of doing this via the database follows.
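
As a minimal sketch for Magento 1, assuming the URLs use the default {{unsecure_base_url}} style values stored in core_config_data (if they were never overridden, set them from the admin under System > Configuration > Web instead), and remembering to flush the configuration cache afterwards:

# magento_db is a placeholder database name.
mysql magento_db <<'SQL'
UPDATE core_config_data SET value = REPLACE(value, '}}skin/', '}}skin.min/')
 WHERE path IN ('web/unsecure/base_skin_url', 'web/secure/base_skin_url');
UPDATE core_config_data SET value = REPLACE(value, '}}js/', '}}js.min/')
 WHERE path IN ('web/unsecure/base_js_url', 'web/secure/base_js_url');
SQL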

Update nginx configuration

nginx configuration to serve .gz static content if it exists:

#/* static content can have expiry set to long */
location ~* \.(jpg|jpeg|gif|png|css|js|ico|swf|woff|woff2|svg|TTF)$ {
    gzip_static on;
    #access_log off;
    log_not_found off;
    expires 360d;
}

gzip_static on tells nginx to serve the .gz version of a static file if it exists, rather than compressing the file on each request.
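
A quick way to verify the pre-compressed file is being served – example.com and the css path below are placeholders – is to check for the gzip content encoding header:

curl -sI -H 'Accept-Encoding: gzip' https://example.com/skin.min/frontend/default/default/css/styles.css | grep -i content-encoding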

Run optimizer on images in media/wysiwyg

grunt media
Then copy the optimized images from media.min/wysiwyg to media/wysiwyg manually, as sketched below.
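
A minimal way to do the copy, assuming rsync is available on the server:

rsync -av media.min/wysiwyg/ media/wysiwyg/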

Conclusion

We firmly believe in creating a documented release process, and Grunt with our Gruntfile.js goes a long way toward making this a reality. In this article we have introduced the minification, image optimization and gzip compression of static files. Try it and let us know if you have any suggestions.

This script can be run directly on the live server, but make sure you do it at a low-traffic time.

Magento Managed Service and luroConnect Insight

Our Story

It was 2013, Magento community version 1.7. We were working with the Chinese mobile phone manufacturer Huawei, developing their Magento website and a flash-sale website that could take 100 orders per minute sustained for 10 minutes. Flash sales by nature have many unpredictable peaks. Our tests indicated Magento could not take that many orders. The bottleneck was the database – or more specifically, the locks used in Magento and held for as long as they were, leading to deadlocks or lock wait timeouts. While we optimized the code – for example, removing the order grid update and putting it into a queue to be done later – we were up against a deadline, so we designed a simple cacheable page with a simple form submitting to a csv file, queued to add the orders to Magento.

Lessons Learnt

This early exercise taught us 2 important principles in scaling Magento – caching and queuing.

We quickly learnt another lesson – we could classify each activity the user did in the flash sale – login, browsing and checkout – and each had a different need. We called this laning: sets of servers with different CPU and memory characteristics and business importance. Once you were selected to be sold a phone, we put you on a checkout server pool.

To circumvent the Magento database issues with orders, we had to split the Magento order database into parts and merge them later. Each part was given a maximum of 100 orders with pre-reserved order ids. When a part was exhausted, the corresponding app server was taken out of the LB. This was our version of sharding – a fourth pillar we built into our scalable architecture.

Launching luroConnect

In early 2017, we launched luroConnect – a business focussed on building technology that ensures the performance, scale and security of websites. We modelled our business as a “productized service” – we offer a service, but in our domain with the features we offer. Like a product, a customer can select us based on features; however, it is a service with a human element involved.

Bootstrapped, we added customers at a pace at which we could build out our technology, which consisted of:

  • A scalable architecture – separating edge, app, db, caching, search, etc. Deployed either as VMs or containers inside a VM.
  • Security rules for hosting – ensuring appropriate folder security, separation of hosting and runtime users, secure access to logs, exports and imports, hosting rules to prevent access to folders from the browser, securing admin pages, etc.
  • luroConnect / Insight – a cloud app that monitors nginx log files and other system parameters such as disk and cache status, a dashboard to show many analytics, an alerting engine that reported slow or error hits, etc.

Our journey continues …

We continue our journey, onboarding customers to the new digital experience. Along the way we added features to our stack based on customer requirements:

  • Image optimization – a frantic call from a customer sent us on the journey to solve that problem in a seamless manner for the customer. Image optimization is part of our hosting stack. Magento does not have good support for quality image generation. Being application aware as well as close to the application, we offer the most efficient image generation solution. Check our blog article on PWA and images.
  • WAF (Web Application Firewall) – a simple Web Application Firewall based on the open source mod-security.
    This led to our own nginx build and the definition of the “luroConnect Edge”.

Today we have multiple customers onboarded. Our product continues to improve as we listen to our customers. We have almost no churn – indicating customers see continued value in our productized service.

If you think your website is a crucial asset of your business, consider our service and enhance the digital experience of your visitors. You can start by selecting one of the options below:

Thought Leaders in Managed Hosting

We are changing the way managed hosting is perceived – from SRE-oriented to full-stack-support oriented. Our CEO, Pradip Shah, speaks at various events, notably the following:

  • Magento Meetup Bangalore, Mar 2019, Bangalore, India
  • Meet Magento India, Jan 2018, Ahmedabad, India

We can analyze your site for free

Schedule a call

Not happy with your website performance and want an expert to look at it?

  • We will analyze your site using public information.
  • We will ask you to give us a 1 day web server log file.
  • We will try to identify what steps, if any, you should take to improve your site's performance.

Watch our webinar on performance and scaling in Magento

It's free!

Using an analogy to vehicular traffic, we explain performance and scaling in Magento.
Key takeaways

  • Know how to compare hosting options
  • Importance of good code
  • How to scale
  • Tuning Magento

Technical SEO: Know what google is indexing

Technical SEO and server logs

Many recent articles from SEO experts, including Rand Fishkin, have been giving importance to the “technical” part of SEO. In fact, Moz's blog has a full category on technical SEO. Recently, the more advanced technical SEO folks have started using server logs for analysis – refer to MOZ and the Internet Marketing Podcast. When google or bing index your site, they use “bots” or crawlers. The only data you have of their visits is the server log. It is during this indexing process that the search engines decide how to rank you. If they see a “Page Not Found” error on a page or slow server performance, they will take note, and it will hurt your site's rankings.

Google and other search engines visit your site regularly

Your website shows up in search results because google “indexes” it. Technical SEO experts will tell you that there are factors in your control such as robots.txt, meta tags, quality back links and sitemaps. In order to keep its index up to date, google uses “bots” or crawlers to traverse your site. The bots honor robots.txt and the sitemap, but will generally visit a page, get its content, find the “href” tags and traverse those pages as well. This technique is used by other search engines such as Microsoft Bing and yahoo. Other bots too visit the site – such as Alexa.

BOT traffic is not seen in google analytics

But do you or your SEO experts have insight into what the google bots are actually seeing? Did they get a 404 not found error? How quickly did your site respond to the google bot?

Some of this is visible in Google Search Console (webmaster tools), but only the server has consolidated data from all crawlers. You cannot see this in the google analytics dashboard. The google analytics dashboard is created from access to the site via the browser; google bots do not use a browser to “crawl” your website.

Google is not the only crawler of your site – and you possibly need to know these hidden visitors and what they are seeing.
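
Even without a tool, the raw server log already holds this data. As a minimal sketch, assuming an nginx or apache log in the common “combined” format (where the status code is field 9 – adjust if your log_format differs), this counts Googlebot hits by HTTP status, so 404s and 5xx responses stand out immediately:

grep -i 'Googlebot' access.log | awk '{print $9}' | sort | uniq -c | sort -rn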

luroConnect Insight presents the “BOT Report”.

What is luroConnect Insight?

luroConnect Insight is a cloud platform that monitors Magento servers, giving insights with dashboards, alerts and reports.

What is the BOT report?

The BOT report tells you everything about the bots that visit your site.

  • Which bots visited your site in the last 24 hours
  • Which bots visited your site in the last one month
  • How fast did the server respond, on average, to all bots and to a specific bot
  • Were any errors seen by the bots?
  • What pages were visited by the bots?

These reports are generated daily and are either visible on a dashboard or sent as an email.

What can I do with the BOT report?

  1. Know which bots are indexing your site.
  2. Act on errors – some may be old links. You can correlate this information with the google dashboard as well.
  3. Review the response time of URLs – some URLs, such as category filters, may be too slow. If the speed cannot be fixed, robots.txt or your sitemap.xml may be modified in consultation with your SEO expert.
  4. The bots may be consuming server resources that would be better left for your real traffic. In consultation with an SEO expert, slow down crawling for selected bots (see the sketch after this list).
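
As a minimal robots.txt sketch – the bot name and path are placeholders, and note that Crawl-delay is honored by bots such as Bing but ignored by Google, whose crawl rate is tuned from Search Console instead:

User-agent: AhrefsBot
Crawl-delay: 10

User-agent: *
Disallow: /catalogsearch/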

How do I get my BOT report?

Easy! If you have an apache or nginx log file from your hosting, go to www.luroconnect.com/botreport/sample, upload the file and choose the type of report you want. You will receive the selected report by email in about half an hour or less.

Alternatively, sign up for luroConnect Insight at luroconnect.com/signup. Fill in your preferred mode of contact and time, and we will contact you. Based on your hosting configuration, we will quickly tell you if this service will work for you out of the box. Our onboarding team will help you install the agent on the server (or, with your permission, will install it for you). Wait 24 hours and you will be able to download the BOT report.

Secure access to a Magento server

Today the biggest threats to your Magento production server are external – being hacked. While you may not be a high-value target, hackers run crawlers on the internet to discover servers with weak security and attack them. In this article we discuss secure access to a Magento server. An OS-level attack, if successful, can only be fully repelled by re-imaging the server. But preventing an OS-level attack is easier than you think – if you follow some simple guidelines.

A Magento production server should have restricted access for everyone. Insecure, password-based access should be disabled. If more than one server is used in a constellation, ssh access to the setup should be restricted to a single server. A sketch of the relevant sshd settings follows.
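
As a minimal sketch, the relevant directives in /etc/ssh/sshd_config would be the following – the user name is a placeholder, and sshd must be reloaded after editing:

PasswordAuthentication no
PermitRootLogin no
AllowUsers deploy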

Will HTTP/2 help my Magento Store?

Introduction

HTTP/2 is the new http standard. Most browsers – including Chrome, Opera, Firefox, Internet Explorer 11, Safari, Amazon Silk and Microsoft Edge – support HTTP/2. Nginx and other web servers support HTTP/2 too. Magento 1.x and Magento 2 work very well with HTTP/2. In this article we look at the benefits of HTTP/2 and give some configuration recommendations for Magento store owners and administrators.

What are the key differences of HTTP/2?

At a high level, HTTP/2:

  • is binary, instead of textual
  • is fully multiplexed, instead of ordered and blocking
  • can therefore use one connection for parallelism
  • uses header compression to reduce overhead
  • allows servers to “push” responses proactively into client caches
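
As a minimal sketch, enabling HTTP/2 in nginx is a single keyword on the listen directive. Browsers only speak HTTP/2 over https, so TLS must already be configured – the server name and certificate paths below are placeholders:

server {
    listen 443 ssl http2;
    server_name example.com;
    ssl_certificate /etc/nginx/ssl/example.crt;
    ssl_certificate_key /etc/nginx/ssl/example.key;
    # ... the rest of the Magento server configuration is unchanged
}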