Oh, I agree with you 100% that they definitely need to hire more people, fix all the bugs and introduce new features faster than their competitors. In my case I only get 10% of video traffic, but I agree it should be fixed for everybody.
I think your configuration is a bit overkill.

zomkornephoros5859 said:
For the front end I'm running two web servers which are replicated; these have Xeon 1225v2 CPUs, which is surprisingly more than enough for YetiShare.
Visitors do not connect to these directly. I'm running two nginx load balancers on cloud providers in two separate continents for redundancy; these proxies/load balancers perform real-time health checking on the backend web servers and also do basic layer 7 request filtering to deter attacks.
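As a rough illustration of the kind of real-time health checking a load balancer layer does, here is a minimal Python sketch; the backend hostnames and the /health path are made-up placeholders, not the actual setup:

    import urllib.request

    # Hypothetical backend web servers sitting behind the load balancers.
    BACKENDS = ["http://backend-1.example.com", "http://backend-2.example.com"]

    def healthy(base_url, timeout=2):
        """Probe a backend; treat an HTTP 200 on /health as 'up'."""
        try:
            with urllib.request.urlopen(base_url + "/health", timeout=timeout) as resp:
                return resp.status == 200
        except OSError:
            return False

    # A load balancer runs a loop like this every few seconds and only
    # proxies traffic to backends currently marked 'up'.
    up = [b for b in BACKENDS if healthy(b)]
    print("healthy backends:", up)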
YetiShare is very database heavy, which will be your main problem on a high-traffic site. This is the main challenge for us, and we have two large 16-core MariaDB servers, which are replicated, to handle the SQL traffic.
YetiShare stores download statistics and download tokens and tracks download progress via the database, and all of this adds up to around 2-3 thousand queries a second to the databases.
However, we are looking to migrate the download tokens to a Redis instance soon to fix this.
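For anyone curious what moving download tokens out of MySQL might look like, here is a minimal sketch using redis-py; the key names and the 6-hour expiry are assumptions for illustration, not YetiShare's actual schema:

    import secrets
    import redis  # pip install redis

    r = redis.Redis(host="localhost", port=6379, db=0)

    def create_download_token(file_id, user_ip, ttl_seconds=6 * 3600):
        """Store a short-lived download token in Redis instead of MySQL.

        SETEX gives the token an expiry, so stale tokens clean themselves up
        without any DELETE queries hitting the database.
        """
        token = secrets.token_urlsafe(32)
        r.setex(f"dl_token:{token}", ttl_seconds, f"{file_id}:{user_ip}")
        return token

    def validate_download_token(token):
        """Return the stored payload if the token is still valid, else None."""
        value = r.get(f"dl_token:{token}")
        return value.decode() if value is not None else None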
We have a basic in-house CDN for handling high-traffic videos, as our storage servers run only SATA/SAS disks, which can't handle the IO that a few thousand concurrent streams create. We have 6 SSD servers replicated from one master server that YetiShare moves the video to. These servers are then accessed via one DNS name, which allows round-robin load balancing to distribute users across all the CDN nodes.
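Round-robin DNS like that just means the CDN hostname resolves to several A records, one per node, and clients end up spread across them. A quick way to see the idea from Python (the hostname below is a placeholder, not a real CDN):

    import socket

    # Hypothetical CDN hostname with one A record per SSD node.
    CDN_HOST = "cdn.example.com"

    # getaddrinfo returns every address the name resolves to; with
    # round-robin DNS each node's IP shows up and resolvers rotate the order.
    addresses = {info[4][0] for info in socket.getaddrinfo(CDN_HOST, 443, proto=socket.IPPROTO_TCP)}
    print(f"{CDN_HOST} resolves to {len(addresses)} node(s): {sorted(addresses)}")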
It could add 200-300ms to your load time depending on locations and providers/peering.

sukhman21 said:
I was thinking of having 2 web servers with a cloud SQL server, and since you are already running this, how is the latency and site speed? E.g. if your first web server and SQL are hosted, say, in North America, and a load balancer connects your visitor from Australia to your 2nd web server, say in Europe, then your Europe server would obviously need to connect to the SQL server in North America. How is the speed on that, and does it affect loading of the website much?
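To put that 200-300ms figure in rough perspective: every SQL query from a Europe web server to a North America database pays roughly the transatlantic round trip, so even a handful of sequential queries per page adds up fast. A back-of-the-envelope sketch (the RTT and query count are just assumed example values):

    # Rough, illustrative numbers only.
    rtt_ms = 90           # assumed Europe <-> North America round trip per query
    queries_per_page = 3  # assumed number of sequential SQL queries per page view

    added_latency_ms = rtt_ms * queries_per_page
    print(f"~{added_latency_ms} ms added per page from cross-continent SQL round trips")
    # => ~270 ms, in the same ballpark as the 200-300 ms mentioned above.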
Can someone share your website URL here or private message me? I would like to take a look at your sites, and if you have good English shows, I will be one of your regular visitors lol...
As I said, we did several modifications to the original code to improve performance. Backups are not an issue for us since most of the MySQL data stays cached in memory all the time, and we archive old records from some tables (not the stats).

zomkornephoros5859 said:
I wouldn't say it's overkill at all. It's designed with high uptime and redundancy in mind.
I'd love to see you try to dump a YetiShare MySQL database, including the 100+ million rows in the stats table, twice a day for backups without causing any performance issues on non-SSD disks. For us it was impossible.
You claimed that SSDs are needed in order to back up large tables. I said that this is not true, since a backup performs read-only operations and most of the MySQL data stays in memory.

zomkornephoros5859 said:
You use memory cache as a backup solution? xD
You will never have a consistent backup on YetiShare because the original code does not use transactions. Locking the entire DB in this case may cause loss of data (or even files) in the current downloads and uploads. If the script has just received an upload and gets a timeout when trying to make an insert in the DB, the upload will be lost.

zomkornephoros5859 said:
Copying the MySQL folder will never provide a reliable and consistent backup. To do it you must shut down or lock the database, otherwise you'll have a database in an inconsistent state.
Mysqldump will always be consistent and easy to restore.
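For what it's worth, if the tables are InnoDB, mysqldump can take a consistent snapshot without locking the whole database by using --single-transaction. A minimal sketch of how that is commonly scripted (the database name, credentials and dump path are placeholders):

    import subprocess
    from datetime import datetime

    # Assumed placeholder connection details and dump location.
    DB_NAME = "yetishare"
    DUMP_FILE = f"/backups/yetishare-{datetime.now():%Y%m%d-%H%M}.sql"

    # --single-transaction: consistent InnoDB snapshot without global locks.
    # --quick: stream rows instead of buffering huge tables in memory.
    # (In a real setup, credentials belong in an option file, not on the
    # command line; --password=... here is purely for illustration.)
    with open(DUMP_FILE, "w") as out:
        subprocess.run(
            ["mysqldump", "--single-transaction", "--quick",
             "--user=backup", "--password=secret", DB_NAME],
            stdout=out,
            check=True,
        )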
If your mechanical disk array can handle 300 MB/s of writes (just an example) and your average write rate is less than 300 MB/s, a hardware RAID controller with write cache can have better performance than a simple SSD on random writes. It can also handle bursts of writes of up to 1 GB without losing performance. Your OS will see a response time of ~0 ms on the write operations, just like writing to RAM.

zomkornephoros5859 said:
If you can find me a hardware RAID with mechanical disks that outperforms an SSD softraid in raw throughput and IOPS then I'll be impressed.
Once that cache starts filling up with sustained disk activity and it starts flushing to disk, it will all go downhill fast.
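Both points can be true at once: a battery-backed write cache absorbs bursts at RAM-like latency, but sustained writes above what the spindles can drain will eventually fill it. A toy model of that trade-off (cache size, ingest and drain rates are all assumed example numbers):

    # Toy model: how long until a RAID controller's write cache fills up?
    cache_mb = 1024        # assumed 1 GB battery-backed write cache
    drain_mb_s = 300       # assumed sustained write speed of the mechanical array
    ingest_mb_s = 500      # assumed incoming write rate during a sustained burst

    net_fill_mb_s = ingest_mb_s - drain_mb_s
    if net_fill_mb_s <= 0:
        print("Cache never fills: writes are acknowledged at RAM-like latency.")
    else:
        seconds_until_full = cache_mb / net_fill_mb_s
        print(f"Cache full after ~{seconds_until_full:.0f}s; then writes drop to disk speed.")
    # With these numbers: 1024 / (500 - 300) ≈ 5 s of headroom.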