Scaling Super-Thread

sukhman21

Member
YetiShare User
YetiShare Supporter
Jan 26, 2015
508
3
18
Oh, I agree with you 100% that they definitely need to hire more people, fix all the bugs, and introduce new features faster than their competitors. In my case I only get 10% video traffic, but I agree it should be fixed for everybody.
 

zomkornephoros5859

New Member
YetiShare User
Reservo User
Apr 6, 2016
4
0
1
I migrated from the other Perl-based script referenced earlier in the thread to YetiShare, and I've found YetiShare to be much more reliable when it comes to high traffic.
I get 40 million page views a month according to Google Analytics, and the front-end webservers take roughly 500,000,000 raw requests per month; typical request rates to the front-end servers are around 300-400 rps.

I'm running a video site with YetiShare and currently host around 140TB of video on 14 or so storage servers, the biggest of which has 72TB of storage and 10Gbit/s of bandwidth.
All servers run PHP 7.0, and I'm using nginx with X-Accel to handle the downloads.
Average upload (outbound) traffic is 6-8Gbit/s.
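For anyone not familiar with the X-Accel approach: PHP only authorises the request and then hands the actual transfer back to nginx via a header, so the PHP worker is freed straight away. Roughly like this (location names and paths are just illustrative, not our actual config):

```nginx
# nginx: internal-only location mapped onto the storage path
location /protected_files/ {
    internal;               # clients can't request this URI directly
    alias /data/files/;     # hypothetical storage root
}
```

```php
<?php
// PHP: once the download has been validated, hand the file off to nginx.
header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="video.mp4"');
header('X-Accel-Redirect: /protected_files/video.mp4');
exit; // nginx streams the file from here on
```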

For graph lovers - [traffic graphs attached]

For the front end I'm running two replicated webservers with Xeon E3-1225 v2 CPUs, which is surprisingly more than enough for YetiShare.
Visitors don't connect to these directly: I run two nginx load balancers on cloud providers in two separate continents for redundancy, and these proxies/load balancers perform real-time health checking on the backend webservers and also do basic layer-7 request filtering to deter attacks.
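For anyone building something similar, a rough sketch of what that proxy layer can look like with stock open-source nginx is below. Hostnames, limits and certificate paths are made up, and note that open-source nginx only does passive health checks via max_fails/fail_timeout; active health checks need NGINX Plus or a third-party module.

```nginx
# in the http{} block: basic layer-7 filtering via per-IP rate limiting
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

upstream backend_web {
    server web1.example.com:443 max_fails=3 fail_timeout=30s;   # passive health checking
    server web2.example.com:443 max_fails=3 fail_timeout=30s;
}

server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/nginx/tls/example.crt;   # placeholder paths
    ssl_certificate_key /etc/nginx/tls/example.key;

    location / {
        limit_req zone=perip burst=20 nodelay;                  # absorb small bursts, drop floods
        proxy_pass https://backend_web;
        proxy_next_upstream error timeout http_502 http_503;    # skip backends that look down
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```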

YetiShare is very database-heavy, which will be your main problem on a high-traffic site. This is the main challenge for us; we have two large 16-core MariaDB servers, replicated, to handle the SQL traffic.
YetiShare stores download statistics and download tokens and tracks download progress via the database, and all of this builds up to around 2-3 thousand queries a second on the databases.
To fix this, we're looking to migrate the download tokens to a Redis instance soon.
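For reference, short-lived download tokens map naturally onto Redis keys with a TTL, which also gets rid of the cleanup queries. A minimal sketch using the phpredis extension; the key prefix, TTL and file id are made up for illustration, not YetiShare's actual schema:

```php
<?php
// Hypothetical download-token store in Redis instead of MySQL.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

// Issue a token that expires on its own.
$fileId = 12345;                                                     // hypothetical file id
$token  = bin2hex(random_bytes(16));
$redis->setEx('download_token:' . $token, 3600, (string) $fileId);   // 1 hour TTL

// When the download starts, validate the token in a single round trip.
$value = $redis->get('download_token:' . $token);
if ($value === false) {
    http_response_code(403);   // unknown or expired token
    exit;
}
```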

We have a basic in-house CDN for handling high-traffic videos, as our storage servers run only SATA/SAS disks, which can't handle the I/O that a few thousand concurrent streams create. We have six SSD servers replicated from one master server that YetiShare moves the video to. These servers are then accessed via a single DNS name, which gives us round-robin load balancing to spread users across all the CDN nodes.
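The round-robin part is just several A records on one hostname; resolvers rotate the order, so clients end up spread across the CDN nodes. Something like this (name and addresses are placeholders):

```
; hypothetical zone snippet: one name, several A records
cdn.example.com.   300   IN   A   203.0.113.10
cdn.example.com.   300   IN   A   203.0.113.11
cdn.example.com.   300   IN   A   203.0.113.12
```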

If you're running a video site like me, you'll most likely be using the media converter plugin. It's best to run this on a separate server, as it's much easier to manage and keeps unwanted load off the storage servers. We host these as cloud instances and can scale their number up or down depending on how many thumbnails need to be processed. We currently store the thumbs on the webservers, which is standard for YetiShare, but we're looking to move them to a dedicated server in the future, as handling millions of little images can be a headache for your application servers.

To conclude: if you're looking to scale out, run PHP 7, tune your nginx instances, and keep a close eye on your MySQL databases, as that is currently the bottleneck.
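On the MySQL side, the slow query log plus a couple of status counters go a long way. Something along these lines (the 500ms threshold is just an example, tune it to your site):

```sql
-- Enable the slow query log at runtime (MySQL/MariaDB)
SET GLOBAL slow_query_log = 1;
SET GLOBAL long_query_time = 0.5;            -- log anything slower than 500 ms

-- Quick health check: query volume and how many threads are actually working
SHOW GLOBAL STATUS LIKE 'Questions';
SHOW GLOBAL STATUS LIKE 'Threads_running';
SHOW FULL PROCESSLIST;
```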
 

enricodias4654

Member
YetiShare User
Jan 13, 2015
411
1
16
zomkornephoros5859 said:
For the front end I'm running two replicated webservers with Xeon E3-1225 v2 CPUs, which is surprisingly more than enough for YetiShare.
Visitors don't connect to these directly: I run two nginx load balancers on cloud providers in two separate continents for redundancy, and these proxies/load balancers perform real-time health checking on the backend webservers and also do basic layer-7 request filtering to deter attacks.

YetiShare is very database-heavy, which will be your main problem on a high-traffic site. This is the main challenge for us; we have two large 16-core MariaDB servers, replicated, to handle the SQL traffic.
YetiShare stores download statistics and download tokens and tracks download progress via the database, and all of this builds up to around 2-3 thousand queries a second on the databases.
To fix this, we're looking to migrate the download tokens to a Redis instance soon.

We have a basic in-house CDN for handling high-traffic videos, as our storage servers run only SATA/SAS disks, which can't handle the I/O that a few thousand concurrent streams create. We have six SSD servers replicated from one master server that YetiShare moves the video to. These servers are then accessed via a single DNS name, which gives us round-robin load balancing to spread users across all the CDN nodes.
I think your configuration is a bit overkill.

My website has almost as many TB as yours and far more requests. I run the database on the main server together with the website. It's a small Opteron server with 32GB of RAM, still running MySQL and PHP 5. 16GB of RAM is always free, the CPU usage is always under 40%, and we don't use SSDs. We use Apache on all servers and we serve the downloads using pure PHP. The CPU usage on the storage servers is around 5%.
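For context, "serving the downloads using pure PHP" means the PHP process itself reads the file and streams it to the client, roughly like this (paths and names are illustrative, not our actual code):

```php
<?php
// Minimal sketch of streaming a download from PHP itself.
$path = '/data/files/example.bin';                       // hypothetical storage path
if (!is_file($path)) {
    http_response_code(404);
    exit;
}
header('Content-Type: application/octet-stream');
header('Content-Length: ' . filesize($path));
header('Content-Disposition: attachment; filename="example.bin"');

$fh = fopen($path, 'rb');
while (!feof($fh)) {
    echo fread($fh, 8192);                               // stream in chunks to keep memory flat
    flush();
}
fclose($fh);
```

The trade-off compared with the X-Accel approach above is that a PHP worker stays occupied for the whole transfer.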
 

zomkornephoros5859

New Member
YetiShare User
Reservo User
Apr 6, 2016
4
0
1
I wouldn't say it's overkill at all.
It's designed with high uptime and redundancy in mind.

I'd love to see you try to dump a YetiShare MySQL database, including the 100+ million rows in the stats table, twice a day for backups without causing any performance issues on non-SSD disks. For us it was impossible.
 

sukhman21

Member
YetiShare User
YetiShare Supporter
Jan 26, 2015
508
3
18
I was thinking of having two webservers with a cloud SQL server, and since you're already running this, how is the latency and site speed?
E.g. say one webserver and the SQL server are hosted in North America, and a load balancer connects a visitor from Australia to your second webserver in Europe; that Europe server would obviously need to connect back to the SQL server. How is the speed on that, and does it affect the loading of the website much?
Can someone share your website URL here or private message me? I'd like to take a look at your sites, and if you have good English shows I'll be one of your regular visitors lol...
 

zomkornephoros5859

New Member
YetiShare User
Reservo User
Apr 6, 2016
4
0
1
sukhman21 said:
I was thinking of having two webservers with a cloud SQL server, and since you're already running this, how is the latency and site speed?
E.g. say one webserver and the SQL server are hosted in North America, and a load balancer connects a visitor from Australia to your second webserver in Europe; that Europe server would obviously need to connect back to the SQL server. How is the speed on that, and does it affect the loading of the website much?
Can someone share your website URL here or private message me? I'd like to take a look at your sites, and if you have good English shows I'll be one of your regular visitors lol...
It could add 200-300ms to your load time depending on locations and providers/peering.
US/Canada to Europe is fine in my experience, especially if you're with the same server provider in both locations, as internal peering is normally used, giving you better connectivity.
I'd also recommend putting something like Cloudflare in front of your site to speed up page loading.
 

enricodias4654

Member
YetiShare User
Jan 13, 2015
411
1
16
zomkornephoros5859 said:
I wouldn't say it's overkill at all.
It's designed with high uptime and redundancy in mind.

I'd love to see you try to dump a YetiShare MySQL database, including the 100+ million rows in the stats table, twice a day for backups without causing any performance issues on non-SSD disks. For us it was impossible.
As I said, we made several modifications to the original code to improve performance. Backups are not an issue for us, since most of the MySQL data stays cached in memory all the time, and we archive old records from some tables (not the stats).

Also, you don't need to make a mysqldump to back up your database; you can back up the MySQL data directory itself.

You should also note that hardware RAID controllers have a write cache, usually 1GB. That means the controller can accept up to 1GB of data and hold it in RAM first. This basically turns random writes into sequential writes, and it usually performs better than plain SSD disks in a software RAID.
 

zomkornephoros5859

New Member
YetiShare User
Reservo User
Apr 6, 2016
4
0
1
You use memory cache as a backup solution? xD

Copying the MySQL folder will never give you a reliable, consistent backup. To do it you must shut down or lock the database, otherwise you'll end up with a database in an inconsistent state.
A mysqldump will always be consistent and easy to restore.
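For completeness, the usual way to get that consistent dump without locking everything is mysqldump's --single-transaction option, which takes a snapshot of InnoDB tables (it doesn't help for MyISAM). Database name and paths below are placeholders:

```sh
# Consistent dump of an InnoDB database without a global read lock
# (credentials via ~/.my.cnf or -u/-p; names and paths are placeholders)
mysqldump --single-transaction --quick yetishare_db \
    | gzip > /backups/yetishare_$(date +%F).sql.gz
```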

If you can find me a hardware RAID with mechanical disks that outperforms an SSD software RAID in raw throughput and IOPS, I'll be impressed.
Once that cache starts filling up under sustained disk activity and it starts flushing to disk, it all goes downhill fast.
 

enricodias4654

Member
YetiShare User
Jan 13, 2015
411
1
16
zomkornephoros5859 said:
You use memory cache as a backup solution? xD
You claimed that SSDs are needed in order to back up large tables. I said that this is not true, since a backup performs read-only operations and most of the MySQL data stays in memory.

zomkornephoros5859 said:
Copying the MySQL folder will never give you a reliable, consistent backup. To do it you must shut down or lock the database, otherwise you'll end up with a database in an inconsistent state.
A mysqldump will always be consistent and easy to restore.
You will never have a consistent backup on YetiShare anyway, because the original code does not use transactions. Locking the entire DB in this case may cause loss of data (or even files) for in-progress downloads and uploads: if the script has just received an upload and gets a timeout when trying to make an insert in the DB, the upload will be lost.
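To illustrate what using transactions would mean here: the related inserts would be committed together, so a failure rolls them all back instead of leaving half-written rows behind. A sketch with PDO; the table and column names are invented, not YetiShare's actual schema:

```php
<?php
// Illustrative only: grouping related writes in one transaction with PDO.
$pdo = new PDO('mysql:host=127.0.0.1;dbname=filehost', 'user', 'pass', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

$pdo->beginTransaction();
try {
    $pdo->prepare('INSERT INTO file (name, size) VALUES (?, ?)')
        ->execute(['video.mp4', 1048576]);
    $fileId = (int) $pdo->lastInsertId();

    $pdo->prepare('INSERT INTO file_stats (file_id, downloads) VALUES (?, 0)')
        ->execute([$fileId]);

    $pdo->commit();          // both rows become visible together
} catch (Exception $e) {
    $pdo->rollBack();        // neither row is written on failure
    throw $e;
}
```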

zomkornephoros5859 said:
If you can find me a hardware RAID with mechanical disks that outperforms an SSD software RAID in raw throughput and IOPS, I'll be impressed.
Once that cache starts filling up under sustained disk activity and it starts flushing to disk, it all goes downhill fast.
If your mechanical disk array can handle, say, 300MB/s of writes and your average write load is less than 300MB/s, a hardware RAID controller with a write cache can perform better than a plain SSD on random writes. It can also handle bursts of writes of up to 1GB without losing performance, and your OS will see a response time of 0ms on the write operations, just like writing to RAM.

The RAID controller is constantly flushing the cache to disk. The magic here is that it turns many small random writes into a single big sequential write, increasing the IOPS for mechanical disks. Those controllers can also read and write to multiple drives at the same time, unlike software RAID.
 

enricodias4654

Member
YetiShare User
Jan 13, 2015
411
1
16
For those who are interested, you can see an example in the attached image. It's a screenshot from one of my servers (not used to host websites, just for this example).

You can see in the graphical interface that all write operations have a 0ms response time; only the read operations show any real response time. There is a list of several write requests to several different files, and without the write cache this would be very slow on mechanical drives. I compared this to my laptop (with a 512GB Samsung 850 Pro SSD), where my write operations take between 1 and 5ms.

This server uses just two SATA drives in RAID 1, and the controller has only 512MB of RAM for cache. Some file names have been blurred in the screenshot for privacy reasons.
 
