I am using the current version of the Amazon Web Services PHP SDK. I am attempting to upload a file to S3 using the multipart uploader. It's working like a charm!
Almost.
Here's the most relevant portion of code.
use Aws\S3\Model\MultipartUpload\UploadBuilder;

// Build a multipart uploader: 25 MB minimum part size, 3 parts in flight at once.
$uploader = UploadBuilder::newInstance()
    ->setClient($s3)
    ->setSource($file_to_upload)
    ->setBucket($bucket)
    ->setKey($key)
    ->setMinPartSize(25 * 1024 * 1024)
    ->setOption('ACL', 'public-read')
    ->setConcurrency(3)
    ->build();

$uploader->upload();
Anyway, it works great. However, the Rackspace server I'm using kills any script that runs too long without producing output, which becomes a problem when uploading very large files. Flushing some data would solve this, but I can't for the life of me figure out how to produce output mid-upload!
AWS has to support this, right? If so, what's the simplest way to achieve it?
The uploader emits several events that you can hook into. This isn't really documented, so you may want to look at the code for Aws\Common\Model\MultipartUpload\AbstractTransfer.
There are 6 events that you can use:
multipart_upload.before_upload
multipart_upload.after_upload
multipart_upload.before_part_upload
multipart_upload.after_part_upload
multipart_upload.after_abort
multipart_upload.after_complete
To register a listener for an event, do something like the following before calling $uploader->upload(). Inside the listener, you can do whatever you want.
$uploader->getEventDispatcher()->addListener(
    'multipart_upload.after_part_upload',
    function ($event) {
        // Do whatever you want here
        echo $event['state']->count() . " parts uploaded.\n";
    }
);
Several pieces of data are passed in via the $event object available to the listener. Look in the code to see exactly what you'll receive.
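Since the real goal here is to keep Rackspace's watchdog happy, you can use that same listener as a heartbeat. Here's a minimal sketch along those lines; note that whether the output actually reaches the client in time also depends on any buffering your web server does (e.g. gzip or FastCGI buffering), so you may need to adjust its config too.

set_time_limit(0); // let PHP run as long as the upload needs

$uploader->getEventDispatcher()->addListener(
    'multipart_upload.after_part_upload',
    function ($event) {
        // Heartbeat: emit a line after every part so the host sees output.
        echo $event['state']->count() . " parts uploaded.\n";

        // Push it past PHP's output buffer (if one is active) and the SAPI buffer.
        if (ob_get_level() > 0) {
            ob_flush();
        }
        flush();
    }
);

$uploader->upload();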