How does Nginx work? What are its load balancing strategies? How is traffic limited?
3. The listenfd of all worker processes becomes readable when a new connection arrives. To ensure that only one process handles the connection, all worker processes compete for accept_mutex before registering the listenfd read event; the process that grabs the mutex registers the read event and calls accept in the read-event handler to accept the connection.
4. Once a worker process has accepted the connection, it reads the request, parses it, processes it, generates the response data and returns it to the client, and finally the connection is closed.
5. What are the common Nginx commands?
Start: nginx
Stop: nginx -s stop or nginx -s quit
Reload: nginx -s reload or service nginx reload
Start with a specified configuration file: nginx -c /usr/local/nginx/conf/nginx.conf
Show version: nginx -v
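A safe habit is to validate the configuration before reloading; a minimal sketch of that workflow (paths assume a default install):
# check that the configuration file is syntactically valid
nginx -t
# if the test passes, reload workers without dropping connections
nginx -s reload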
6. What is the difference between 500, 502, 503 and 504 in Nginx?
500 Internal Server Error: an error inside the service itself, such as a script bug or a language syntax error.
502 Bad Gateway: the gateway received an invalid response from the upstream, e.g. the backend has too many connections, responds too slowly, or the page payload is too large for the available bandwidth.
503 Service Temporarily Unavailable: the web server cannot handle the HTTP request at the moment, usually because it is temporarily overloaded or down for maintenance.
504 Gateway Timeout: the upstream takes too long to respond. For example, if the program needs 20 seconds to run but Nginx waits at most 10 seconds for a response, the request times out.
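The 10-second window in the 504 example corresponds to Nginx's proxy timeout directives; a minimal sketch, with illustrative (non-default) values and a hypothetical upstream name:
location / {
    proxy_pass http://backend;      # "backend" is a placeholder upstream
    proxy_connect_timeout 10s;      # max time to establish the upstream connection
    proxy_read_timeout 10s;         # max wait for the upstream response; exceeding it returns 504
}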
7. Do you understand Nginx compression? How do you enable it?
After gzip compression is enabled in Nginx, static resources such as images, CSS and JS are transferred in a smaller, compressed form, which saves bandwidth and improves transfer efficiency, but costs extra CPU time.
To enable it:
# turn on gzip
gzip on;
# minimum file size for gzip compression; files smaller than this value are not compressed
gzip_min_length 1k;
# gzip compression level, 1-9: the higher the number, the better the compression and the more CPU time it takes
gzip_comp_level 1;
# file types to compress; JavaScript comes in several forms, and the valid values can be found in the mime.types file
gzip_types text/plain application/javascript application/x-javascript text/css application/xml text/javascript application/x-httpd-php image/jpeg image/gif image/png application/vnd.ms-fontobject font/ttf font/opentype font/x-woff image/svg+xml;
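To verify that compression is actually applied, check the response headers; a quick test against a placeholder URL (a Content-Encoding: gzip header confirms it):
curl -s -o /dev/null -D - -H "Accept-Encoding: gzip" http://localhost/style.css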
8. Differences between Nginx, Apache and Tomcat
1. Nginx and Apache are web servers, while Apache Tomcat is a servlet container.
2. Tomcat can parse and execute JSP, while Nginx and Apache are plain web servers; roughly speaking, they only serve static files such as HTML.
The difference between Nginx and Apache:
1) Nginx is lightweight; as a web server it uses less memory and fewer resources than Apache.
2) Nginx handles high concurrency well: it processes requests asynchronously and without blocking, whereas Apache blocks. Under high concurrency, Nginx keeps resource consumption low and performance high.
3) Nginx provides load balancing and can serve as a reverse proxy and front-end server.
4) Nginx is multi-process and single-threaded, asynchronous and non-blocking; Apache is multi-process, synchronous and blocking.
9. What load balancing strategies does Nginx have?
Nginx provides the following load balancing strategies by default:
1. Round robin (the default)
Requests are distributed across the backend servers one by one in order; if a backend server goes down, it is removed from rotation automatically.
2. IP hash (ip_hash)
Requests are distributed according to a hash of the client IP, so each visitor always reaches the same backend server, which can solve the session-sharing problem.
In real scenarios, however, ip_hash is generally not the preferred way to solve session sharing.
3. Least connections (least_conn)
The next request is dispatched to the server with the fewest active connections.
4. Weight (weight)
The larger the weight value, the higher the probability a server is selected; it is mainly used when backend servers have uneven performance, to make reasonable use of resources.
Additional strategies are also available through third-party modules; a configuration sketch of the built-in ones follows.
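Each strategy is selected inside an upstream block; a minimal sketch with hypothetical addresses:
upstream backend {
    # round robin is the default when no directive is given
    # ip_hash;     # uncomment for source-IP affinity (remove the backup line first, they cannot be combined)
    # least_conn;  # uncomment to prefer the least-busy server
    server 192.168.0.1:8080 weight=3;  # receives roughly 3x the traffic of the others
    server 192.168.0.2:8080;
    server 192.168.0.3:8080 backup;    # used only when the other servers are unavailable
}
server {
    location / {
        proxy_pass http://backend;
    }
}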
10. Have you separated dynamic and static resources in Nginx? Why do it?
Separating dynamic and static resources means splitting a dynamic website's rarely-changing resources from its frequently-changing ones according to certain rules.
For example, JS, CSS and HTML are returned by server A, images by server B, and all other requests go to Tomcat server C.
Deploying the backend application separately speeds up users' access to static content, and with a CDN service in front, the static traffic no longer strains the origin server's bandwidth.
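A common way to express this split in Nginx is one location for static files and another proxied to the application server; a minimal sketch with hypothetical paths and backend address:
server {
    # serve static assets directly from disk, with a long client-side cache
    location ~* \.(js|css|html|png|jpg|gif)$ {
        root /data/static;
        expires 30d;
    }
    # everything else goes to the Tomcat backend
    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}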
11. Do you understand the ngx_http_upstream_module module?
The ngx_http_upstream_module module is used to define groups of servers that can then be referenced by the proxy_pass, fastcgi_pass, uwsgi_pass, scgi_pass and memcached_pass directives.
For example, caching + scheduling for visits to www.a.com:
http {
    proxy_cache_path /var/cache/nginx/proxy_cache levels=1:2:2 keys_zone=proxycache:20m inactive=120s max_size=1g;  # cache storage
    upstream mysqlsrvs {
        ip_hash;                          # source-address hash scheduling; cannot be combined with backup
        server 172.18.99.1:80 weight=2;   # weight
        server 172.18.99.2:80;            # mark "down" together with ip_hash to implement gray release
        server 172.18.99.3:80 backup;     # "standby": enabled only when all other servers are unavailable
    }
    server {
        server_name www.a.com;
        proxy_cache proxycache;
        proxy_cache_key $request_uri;
        proxy_cache_valid 200 301 302 1h;
        proxy_cache_valid any 1m;
        location / {
            proxy_pass http://mysqlsrvs;
        }
    }
}
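To observe whether a response was actually served from the cache, you can expose the cache status in a response header; a small, optional addition (the header name is arbitrary):
server {
    # reports HIT, MISS, EXPIRED, etc. for each request
    add_header X-Cache-Status $upstream_cache_status;
}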
12. Do you understand traffic limiting? How is traffic limited?
Nginx provides two ways to limit traffic: controlling the request rate and controlling the number of concurrent connections.
1. Rate control
The ngx_http_limit_req_module module implements the leaky bucket algorithm and can limit how frequently requests from a single IP are processed.
Example:
1.1 Basic rate limiting:
http {
    limit_req_zone $binary_remote_addr zone=myLimit:10m rate=5r/s;
    server {
        location / {
            limit_req zone=myLimit;
            rewrite / http://fhadmin.cn permanent;
        }
    }
}
Explanation of the parameters:
key: defines the object to limit; $binary_remote_addr limits per client IP.
zone: defines the shared memory zone that stores the access state.
rate: sets the maximum access rate.
The example defines a 10 MB memory zone named myLimit, keyed by client IP address, to record each IP's access state.
rate sets the per-IP access frequency: rate=5r/s means at most 5 requests per second are processed for each IP address.
Nginx tracks the limit at millisecond granularity, so processing 5 requests per second really means one request every 200 ms; if a request has already been handled in the current 200 ms window and another one arrives, Nginx refuses to process it.
1.2 Allowing bursts on top of the rate limit
With rate set to 5r/s, a sudden traffic spike means the excess requests are rejected with 503, and rejecting burst traffic outright hurts the business.
In that case you can add the burst parameter, which is generally used together with nodelay.
server {
    location / {
        limit_req zone=myLimit burst=20 nodelay;
        rewrite / http://fhadmin.cn permanent;
    }
}
burst=20 nodelay means up to 20 excess requests are processed immediately instead of being delayed, as a special allowance. However, even though those 20 burst requests are served right away, subsequent requests are not:
burst=20 amounts to 20 slots in a queue; even after a burst request has been processed, its slot is only released at the configured rate, i.e. one slot every 200 ms at 5r/s.
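For contrast, using burst without nodelay queues the excess instead of serving it immediately; a minimal sketch:
location / {
    # up to 20 excess requests wait in the queue and are drained at the zone's rate
    # (one per 200 ms at 5r/s); requests beyond the queue are rejected
    limit_req zone=myLimit burst=20;
}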
2. Controlling the number of concurrent connections
The ngx_http_limit_conn_module module provides connection-count limiting:
limit_conn_zone $binary_remote_addr zone=perip:10m;
limit_conn_zone $server_name zone=perserver:10m;
server {
    ...
    limit_conn perip 10;
    limit_conn perserver 100;
}
limit_conn perip 10 uses the key $binary_remote_addr and means a single IP may hold at most 10 simultaneous connections.
limit_conn perserver 100 uses the key $server_name and indicates the total number of concurrent connections the virtual host (server) can handle at the same time.
Extension:
If some clients should not be limited at all, you can also set up a whitelist,
using the functions provided by the two Nginx modules ngx_http_geo_module and ngx_http_map_module:
# define a variable marking whitelisted IPs with 0
geo $limit {
    default 1;
    10.0.0.0/8 0;
    192.168.0.0/16 0;
    81.56.0.35 0;
}
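The geo block alone only computes $limit; to actually exempt the whitelist, map it to an empty key, because requests with an empty limit_req_zone key are not accounted. A sketch completing the example (zone name and rate are illustrative):
map $limit $limit_key {
    0 "";                    # whitelisted: empty key, no limiting
    1 $binary_remote_addr;   # everyone else: limited per client IP
}
limit_req_zone $limit_key zone=req_zone:10m rate=5r/s;
server {
    location / {
        limit_req zone=req_zone burst=10 nodelay;
    }
}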