Error when 2 files are being created at the exact same time

enricodias4654

Member
YetiShare User
Jan 13, 2015
411
1
16
Hello.

The file table has a unique index on the shortUrl field. This field is initially filled with the string 'temp' by the moveIntoStorage function in the file class. The script doesn't use transactions, but MySQL queries are atomic. When 2 files are processed at the same time, one of them manages to insert a row with shortUrl = 'temp'. The second file then tries to insert its own row with shortUrl = 'temp' before the first file has replaced the placeholder with the real shortUrl, and that insert fails because the unique index doesn't allow 2 rows with shortUrl = 'temp'.
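The collision is easy to reproduce in isolation. A minimal sketch, assuming PDO with an in-memory SQLite database — YetiShare's real `file` table has more columns, but only the unique index on shortUrl matters here:

```php
<?php
// Minimal reproduction of the race outcome described above (sketch:
// assumes PDO + SQLite; the real schema and DB layer differ).
$db = new PDO('sqlite::memory:');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$db->exec('CREATE TABLE file (id INTEGER PRIMARY KEY, shortUrl TEXT UNIQUE)');

// Upload 1 inserts its placeholder row.
$db->exec("INSERT INTO file (shortUrl) VALUES ('temp')");

// Upload 2 tries the same placeholder before upload 1 has replaced it
// with the real shortUrl -- the unique index rejects the row.
$secondInsertFailed = false;
try {
    $db->exec("INSERT INTO file (shortUrl) VALUES ('temp')");
} catch (PDOException $e) {
    $secondInsertFailed = true; // SQLSTATE 23000, duplicate key
}
```

In production the two inserts come from separate requests, so the window is small — which is why it's hard to hit on purpose.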

Also, this is how the script checks the insert: if (!$dbInsert->insert()). If you look at the insert function you will see that it executes the insert and then returns the last inserted id. But that id isn't necessarily from the insert that just ran: if the insert fails, it returns the id of an insert executed earlier, so the failure goes unnoticed.
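The pitfall is easy to demonstrate with PDO's lastInsertId(). This is a sketch against SQLite; the $dbInsert wrapper is assumed to do something equivalent internally:

```php
<?php
// Sketch of why a returned id is not proof of success. Silent error
// mode makes a failed exec() return false instead of throwing, which
// lets us inspect both the real result and the stale id.
$db = new PDO('sqlite::memory:');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_SILENT);
$db->exec('CREATE TABLE file (id INTEGER PRIMARY KEY, shortUrl TEXT UNIQUE)');

$db->exec("INSERT INTO file (shortUrl) VALUES ('abc123')"); // succeeds, id 1

// This insert violates the unique index and fails...
$ok = $db->exec("INSERT INTO file (shortUrl) VALUES ('abc123')");

// ...yet lastInsertId() still reports the id of the EARLIER insert.
// Code that only checks for a non-empty id therefore sees "success".
// The reliable signal is the return value of the insert itself:
// $ok is false here, while lastInsertId() is still "1".
```

So the fix is for insert() to return false (or throw) when the query itself fails, instead of passing along whatever id the connection last generated.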

This error is not easy to reproduce. It took me all morning to find it.
 

sukhman21

Member
YetiShare User
YetiShare Supporter
Jan 26, 2015
508
3
18
If I understand this correctly, 2 files that get uploaded and finish at the same time will create issues? The second file will restart the upload process..
I have actually seen this but was never able to figure it out.. So what happened to me was, while I was uploading files, I'd see some of them get to 80% or even 100% and then restart the upload on that same file again...

This is actually an issue, because if someone uploads a 2GB file and the upload restarts or errors out, the user will need to reupload the whole file again...

It would be nice if the upload script created temp, temp1, temp2, etc. as needed..
 

enricodias4654

Member
YetiShare User
Jan 13, 2015
411
1
16
I didn't test it with direct upload; I tested with remote upload using multiple file servers. Reproducing it with direct upload would be even harder.

Replacing 'temp' with 'temp_'.mt_rand(1, 1000000) will work. Removing the unique index from the table would also work. The $dbInsert->insert() function, on the other hand, needs some modifications.
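A sketch of that placeholder fix — the helper name is illustrative, only the 'temp_'.mt_rand() idea comes from this thread:

```php
<?php
// Hypothetical helper for the fix above: give every in-flight upload
// its own placeholder so concurrent inserts can't collide on 'temp'.
function tempShortUrl(): string
{
    return 'temp_' . mt_rand(1, 1000000);
}

$placeholder = tempShortUrl(); // e.g. "temp_482913"
```

A collision is still theoretically possible (two uploads can draw the same random number), so retrying on a duplicate-key error, or using uniqid() instead of mt_rand(), would close the gap completely.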

If the insert fails, $dbInsert->insert() will return the id of the last successful insert. In a remote upload it may return the id of a row in the remote upload table, and the script will then update the shortUrl of a different file using that id.