Read through the nginx code and figure out why it's faster #10

Open
jyn514 opened this issue Mar 16, 2019 · 6 comments

Comments


jyn514 commented Mar 16, 2019

No description provided.


jyn514 commented Mar 16, 2019


jyn514 commented Mar 17, 2019

Note: this is only for large files. We're within a factor of 2 of nginx for small requests, but behind by three orders of magnitude for large files. I'm seeing `TCP: request_sock_TCP: Possible SYN flooding on port 80. Sending cookies. Check SNMP counters.` in dmesg; maybe if we accepted more simultaneous connections the kernel wouldn't throttle us? This could also just be because nginx uses mmap, which is on the feature list.


jyn514 commented Mar 17, 2019

This was an error on my part: my nginx config was set up wrong, so it was serving a 404 on every page, which is understandably much faster than sending back 7 MB of binary. We're down to a factor-of-three difference (yay!)


jyn514 commented Mar 17, 2019

Down to a factor of 2 with #13.

Nginx for a 22MB file:

$ ab -n 1000 -c 100 localhost/perf.data.old
This is ApacheBench, Version 2.3 <$Revision: 1807734 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking localhost (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests


Server Software:        nginx/1.14.0
Server Hostname:        localhost
Server Port:            80

Document Path:          /perf.data.old
Document Length:        22897320 bytes

Concurrency Level:      100
Time taken for tests:   6.105 seconds
Complete requests:      1000
Failed requests:        0
Total transferred:      22897586000 bytes
HTML transferred:       22897320000 bytes
Requests per second:    163.80 [#/sec] (mean)
Time per request:       610.490 [ms] (mean)
Time per request:       6.105 [ms] (mean, across all concurrent requests)
Transfer rate:          3662785.98 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    1   0.5      0       3
Processing:   450  608  20.7    615     716
Waiting:        0    3   7.2      0      36
Total:        450  609  20.4    616     718
WARNING: The median and mean for the initial connection time are not within a normal deviation
        These results are probably not that reliable.

Percentage of the requests served within a certain time (ms)
  50%    616
  66%    618
  75%    618
  80%    619
  90%    623
  95%    624
  98%    625
  99%    627
 100%    718 (longest request)

We're hovering around 1100; see #13 for details.


jyn514 commented Mar 17, 2019

`TCP: request_sock_TCP: Possible SYN flooding on port 80. Sending cookies. Check SNMP counters.` has gone away for 22 MB requests, but it's still present when fetching a 77 MB file. I'm inclined to leave it; I doubt that 100 people at a time downloading files that big off a single server is a common scenario.


jyn514 commented Mar 24, 2019

We could also look into thread pools and see if that makes a difference. That would mean implementing proper semaphores, though.
