xSendFile or X-Accel-Redirect and the downloads page

pilot830

New Member
YetiShare User
Jan 22, 2014
242
1
0
On the downloads page (/admin/download_current.php) it says:

Note: This page will not show any data if you are using xSendFile (Apache) or xAccelRedirect (Nginx) to handle your file downloads.

I appreciate the xSendFile and X-Accel-Redirect support, because it allows downloads to go faster, etc.

But I was wondering: if we route all our downloads through xSendFile or X-Accel-Redirect, doesn't that mean we have no sense of what's going on in terms of how many downloads are currently running? Then we have no idea how busy the site is or anything,

other than, I guess, checking the bandwidth on the servers...

Isn't there a way to keep xSendFile or X-Accel-Redirect but still show download stats?

The most important point I'm trying to make is this: if a file gets uploaded to server2.domain.com and then starts getting thousands of downloads, to the point where server2.domain.com runs out of bandwidth, then as the operator of the file hosting site you will need to copy that file manually to other file servers (server3.domain.com etc.) and update its serverId so that new traffic goes to server3 or the other file servers. But if /admin/download_current.php doesn't show any downloads because everything runs through xSendFile or X-Accel-Redirect, how will we know which file is getting too many downloads so that we can take those steps?

Just looking at a server's bandwidth usage is not going to tell you which popular file everyone is trying to access is the one killing your server resources.
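
One possible workaround (not a YetiShare feature, just a sketch): even when downloads bypass PHP via X-Accel-Redirect, nginx still writes every transfer to its access log, so you can tally downloads per file from there. The log path, the default "combined" log format, and the URL layout are assumptions for illustration.

```python
import re
from collections import Counter

# Matches the request path of a successful GET/HEAD in a default
# nginx "combined" log line. The log path below is an assumption.
LINE_RE = re.compile(r'"(?:GET|HEAD) (\S+) HTTP/[\d.]+"\s+200\s')

def top_downloads(log_path, limit=10):
    """Count completed 200-status requests per path and return the busiest."""
    counts = Counter()
    with open(log_path) as log:
        for line in log:
            match = LINE_RE.search(line)
            if match:
                counts[match.group(1)] += 1
    return counts.most_common(limit)

# Example usage (path is hypothetical):
# for path, hits in top_downloads("/var/log/nginx/access.log"):
#     print(hits, path)
```

A cron job running something like this every few minutes would flag a file that is suddenly hammering one server, even with the PHP-side tracker blind.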

Note: I am not actually using xSendFile or X-Accel-Redirect yet; my downloads page shows downloads and info. But I'm planning on testing xSendFile out, and I just realized that if I use it and no longer have access to live download stats, I will have no way of knowing which file is getting so many downloads that it's hammering my servers.
 

paypal1352

New Member
YetiShare User
Wurlie User
Mar 2, 2012
297
2
0
+1,

Currently I'm using X-Accel-Redirect and use a combination of Google Analytics and the file stats from the admin panel to know what's happening.
 

adam

Administrator
Staff member
Dec 5, 2009
2,046
108
63
Downloads are still logged; it's just current downloads you can't see. If you look at the download stats for each file you'll still see them.

Unfortunately, as PHP hands off to xSendFile, there's no way for it to know if the file is still being downloaded.
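
For context, a minimal sketch of the handoff on the nginx side (the directives are real nginx, but the location name and paths are hypothetical): the PHP script does its auth and logging, emits an `X-Accel-Redirect` header, and exits; nginx then serves the bytes from an internal location. Once PHP has exited, nothing on the PHP side can observe whether the client ever finished the transfer.

```nginx
# Hypothetical internal location for the X-Accel-Redirect handoff.
# PHP would respond with:  X-Accel-Redirect: /protected_files/<storage-path>
location /protected_files/ {
    internal;                      # only reachable via X-Accel-Redirect, not directly
    alias /home/yetishare/files/;  # hypothetical storage root
}
```

This is why the current-downloads page goes dark: the "download" from PHP's point of view ends the moment the header is sent.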
 

enricodias4654

Member
YetiShare User
Jan 13, 2015
411
1
16
You should also remember that with xSendFile people can start downloads without finishing them, just to earn rewards from the rewards plugin.

I've been running a leech script built on MySQL since 2011, and a month ago I started running YetiShare on the same servers, without xSendFile. YetiShare consumes a lot of memory, and this can be optimized: my leech script uses as little as 10 MB per process, while YetiShare uses up to 200 MB in some cases, without xSendFile. Everything else is low. Even with 600 requests on the same server (150 of them from YetiShare) the load is still 0 and CPU usage is less than 10%. I don't see any need to use xSendFile at all.
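
On the memory point: the usual way to keep a download proxy at a small, fixed footprint is to stream the file in small chunks instead of buffering it. A rough sketch of the idea in Python (the chunk size and names are arbitrary, not anything from YetiShare):

```python
def stream_file(path, chunk_size=64 * 1024):
    """Yield a file in fixed-size chunks, so per-process memory use
    stays at roughly chunk_size no matter how large the file is."""
    with open(path, "rb") as fh:
        while True:
            chunk = fh.read(chunk_size)
            if not chunk:
                break
            yield chunk

# Example usage: write the chunks straight to the client socket/response,
# never holding more than one chunk in memory at a time.
```

The same loop in PHP (fread/echo/flush with a small buffer) is what keeps a pure-PHP sender from ballooning to hundreds of MB per process.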

About copying the file to another server: https://forum.mfscripts.com/viewtopic.php?f=13&t=2258&p=8031
 

pilot830

New Member
YetiShare User
Jan 22, 2014
242
1
0
enricodias4654 said:
You should also remember that with xSendFile people can start downloads without finishing them, just to earn rewards from the rewards plugin.
Thanks for pointing this out. I was unaware of this.

enricodias4654 said:
I don't see any need to use xSendFile at all.
The reason I'm interested is that when I upload a 10gb.zip file to my YetiShare site, with the file residing on server1.mydomain.com, and then download that file from a server on the SAME network, the speed is kind of slow: it varies between roughly 5 MB/s and 15 MB/s. But if I take that same 10gb.zip file and place it directly in the public_html directory on server1.mydomain.com (not uploading through YetiShare) and then pull http://server1.mydomain.com/10gb.zip onto the same server again, it goes way faster, around 60 to 85 MB/s; it basically goes all the way up to 1 Gbit.

So this tells me that serving downloads through PHP is slow. I suspect this is the whole reason adam/YetiShare recommend xSendFile or X-Accel-Redirect.

I can imagine that if my site got popular, people would complain about speed.

But it sounds like you're saying that if the script were optimized better, it could reach those fast speeds I was referencing (60 MB/s etc.) and still go through PHP?

enricodias4654 said:
About copying the file to another server: https://forum.mfscripts.com/viewtopic.php?f=13&t=2258&p=8031
I like the idea of being able to copy a file to another server, but only if that file is getting more than X downloads. But what you said about not having any RAID on the servers doesn't sound good to me, because hard drives fail, and if you don't have RAID 1 you will have to reinstall the OS/box, and that's a hassle.

And you said this here:
>It would be better to not use any raid and have 2 copies of each file on different servers.

I don't think this would be a good idea, at least not for my setup, because I don't see the point of copying every file that people upload to the other file servers; that's just going to take up a lot of unnecessary space.

I think it'd be better to have the script auto-copy files that are getting X downloads to the other servers, because there is actual demand for those files. Why copy files that have no demand to other servers? That's just wasting space and wouldn't be worth it.
 

pilot830

New Member
YetiShare User
Jan 22, 2014
242
1
0
Adam, regarding this:

enricodias4654 said:
You should also remember that with xSendFile people can start downloads without finishing them, just to earn rewards from the rewards plugin.
If this is true, would it still apply to X-Accel-Redirect? Would nginx know whether the download completed, and therefore not reward them if it hasn't? I'm assuming so, because I think I saw some reference to X-Accel-Redirect in the rewards plugin.
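
For what it's worth, nginx does record how many bytes it actually sent for each request (`$body_bytes_sent` in the default log format), so completion can at least be detected after the fact by comparing that against the file size. Whether the rewards plugin does anything like this I can't say; the sketch below is only the idea, and the tolerance value is a made-up parameter:

```python
def transfer_completed(bytes_sent, file_size, tolerance=0.99):
    """Treat a download as complete when nearly all bytes went out.

    bytes_sent would come from nginx's $body_bytes_sent log field;
    the tolerance allows for a client dropping the last few packets.
    """
    if file_size <= 0:
        return False
    return bytes_sent / file_size >= tolerance
```

A log-tailing job could apply this per request and only credit rewards for transfers that pass the check.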
 

enricodias4654

Member
YetiShare User
Jan 13, 2015
411
1
16
pilot830 said:
The reason I'm interested is that when I upload a 10gb.zip file to my YetiShare site, with the file residing on server1.mydomain.com, and then download that file from a server on the SAME network, the speed is kind of slow: it varies between roughly 5 MB/s and 15 MB/s. But if I take that same 10gb.zip file and place it directly in the public_html directory on server1.mydomain.com (not uploading through YetiShare) and then pull http://server1.mydomain.com/10gb.zip onto the same server again, it goes way faster, around 60 to 85 MB/s; it basically goes all the way up to 1 Gbit.

So this tells me that serving downloads through PHP is slow. I suspect this is the whole reason adam/YetiShare recommend xSendFile or X-Accel-Redirect.

But it sounds like you're saying that if the script were optimized better, it could reach those fast speeds I was referencing (60 MB/s etc.) and still go through PHP?
Well, yes. You see, in my leech setup the script downloads a file from one file server, writes it to disk for caching, and sends it to the user, all in a loop using sockets. I can push this process up to 60 MB/s when downloading from another server on 1 Gbps; the bottleneck is the I/O of the SATA drives. Reading from the cache I have no problem reaching 90 MB/s. I didn't turn the leech off to run those tests; they ran in parallel with other users downloading from the same server.

But if you are getting only between 5 and 15 MB/s, it may have to do with other factors. You should check your server I/O, track the PHP processes during the download, try increasing the update interval for the download_tracker, etc.


pilot830 said:
I like the idea of being able to copy a file to another server, but only if that file is getting more than X downloads. But what you said about not having any RAID on the servers doesn't sound good to me, because hard drives fail, and if you don't have RAID 1 you will have to reinstall the OS/box, and that's a hassle.
Most datacenters have automated scripts to install the OS, and I have an automated script that configures Apache, MySQL and other tools on a server. So reinstalling a server is faster for me than rebuilding a RAID array.


pilot830 said:
And you said this here:
>It would be better to not use any raid and have 2 copies of each file on different servers.

I don't think this would be a good idea, at least not for my setup, because I don't see the point of copying every file that people upload to the other file servers; that's just going to take up a lot of unnecessary space.

I think it'd be better to have the script auto-copy files that are getting X downloads to the other servers, because there is actual demand for those files. Why copy files that have no demand to other servers? That's just wasting space and wouldn't be worth it.
If you use RAID 1 in one server you lose 50% of the disk space. If you use RAID 0 (or no RAID at all) on 2 servers but copy the files to both, you still lose 50% of the disk space, but you can balance the load between the 2 servers.

And my point is not only load balancing, but building a high-availability cluster. You should also remember that bandwidth usually costs more than disk space.
 

enricodias4654

Member
YetiShare User
Jan 13, 2015
411
1
16
I was reading the downloadTracker class: it connects to MySQL, sends an update and then closes the connection. This can affect the download speed depending on the latency and the update frequency.

I recommend keeping the MySQL connection open when the update interval is short (10 or 15 seconds).

You could also write this information to files and have a cron job that reads all the files and sends the information to the DB in one batch.
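
To illustrate the file-based idea (all names and paths here are made up; downloadTracker's real internals may differ): each download process appends one cheap line to a local spool file, and a cron job periodically drains the spool into a single batched DB write instead of one MySQL round-trip per update.

```python
import os

SPOOL = "/tmp/download_tracker.spool"  # hypothetical spool location

def record_progress(file_id, bytes_sent, spool=SPOOL):
    # Appending one short line is far cheaper than opening a MySQL
    # connection for every tracker update.
    with open(spool, "a") as fh:
        fh.write(f"{file_id}\t{bytes_sent}\n")

def drain_spool(spool=SPOOL):
    """Run from cron: read all spooled updates, keep only the latest
    byte count per file, and return rows ready for one batched write."""
    latest = {}
    if not os.path.exists(spool):
        return []
    with open(spool) as fh:
        for line in fh:
            file_id, bytes_sent = line.rstrip("\n").split("\t")
            latest[file_id] = int(bytes_sent)
    os.remove(spool)  # in production you'd rotate, not delete, to avoid races
    return sorted(latest.items())
```

The drain step collapses many per-chunk updates into one row per file, so the DB sees one batch per cron run regardless of how many downloads are in flight.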