
rhand (Level 6)

DO Spaces states 180GB / S3CMD 8GB

I cleaned up my Digital Ocean Spaces bucket and moved a lot into the trash. However, when I check things from the terminal with s3cmd I see

s3cmd du -H s3://bucket_name
   8G     155 objects s3://bucket_name/
s3cmd multipart s3://bucket_name
s3://bucket_name/
Initiated	Path	Id

Only 8 GB stored and, it seems, no multipart parts hanging around... However, Digital Ocean Spaces states I have 188.4 GB / 37,958 items for that bucket. What am I missing here?

0 likes
9 replies
LaryAI (Level 58)

It's possible that the files you deleted are still taking up space in Digital Ocean Spaces' trash. You can try permanently deleting them from the trash to see if that frees up space.

To do this, you can use the following command in the terminal:

s3cmd rb --recursive s3://bucket_name/trash

This will recursively delete all objects in the trash for the specified bucket.

If this doesn't solve the issue, it's possible that there is a delay in Digital Ocean Spaces updating its usage statistics. You can try waiting a bit and checking again later.

1 like
rhand (Level 6)

Tried that trash-deletion command, rb --recursive, but it failed:

s3cmd rb --recursive s3://bucket_name/trash
ERROR: Parameter problem: Expecting S3 URI with just the bucket name set instead of 's3://bucket_name/trash'
s3cmd rb --recursive s3://bucket_name/trash/
ERROR: Parameter problem: Expecting S3 URI with just the bucket name set instead of 's3://bucket_name/trash/'

I think rb is for removing a bucket and everything inside it when used with --recursive. Not sure that can work on the trash.
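For what it's worth, the error message reads like a deliberate check: rb only accepts a URI that names the bucket itself, with nothing after it. A toy sketch of that kind of validation (our own function, not s3cmd's actual source):

```python
from urllib.parse import urlparse

def is_bare_bucket_uri(uri: str) -> bool:
    """Mimic the kind of check s3cmd's `rb` appears to apply: the URI
    must name a bucket only, with no key/prefix after it.
    (Toy sketch, not s3cmd's real implementation.)"""
    parsed = urlparse(uri)
    return parsed.scheme == "s3" and parsed.netloc != "" and parsed.path in ("", "/")

print(is_bare_bucket_uri("s3://bucket_name"))         # True: rb accepts this
print(is_bare_bucket_uri("s3://bucket_name/trash/"))  # False: "Expecting S3 URI with just the bucket name"
```

For deleting everything under a prefix (rather than the bucket), s3cmd del --recursive s3://bucket_name/trash/ would be the closer fit, assuming the trash is actually exposed as a prefix, which in this thread it does not appear to be.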

rhand (Level 6)

DO Spaces suggested running aws s3api list-multipart-uploads --bucket bucket_name to list incomplete multipart uploads, as that should be more reliable than s3cmd, but I am not sure how to run this type of command-line app on my Mac.

rhand (Level 6)

Okay, installed awscli and now have this issue

aws s3api list-multipart-uploads --bucket bucket_name

Could not connect to the endpoint URL: "https://smtapp.s3.AMS3.amazonaws.com/?uploads"

so I tried adding the endpoint:

aws s3api list-multipart-uploads aws --endpoint=https://ams3.digitaloceanspaces.com --bucket bucket_name

usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:
  aws help
  aws <command> help
  aws <command> <subcommand> help
Unknown options: aws

That did not work either. Then I saw the duplicate aws typo, so I ran

aws s3api list-multipart-uploads --endpoint https://ams3.digitaloceanspaces.com --bucket bucket_name

but nothing was listed at all. With --debug I saw this:

aws s3api list-multipart-uploads --debug --endpoint https://ams3.digitaloceanspaces.com --bucket bucket_name
2023-04-15 09:42:06,446 - MainThread - awscli.clidriver - DEBUG - CLI version: aws-cli/2.11.13 Python/3.11.3 Darwin/22.4.0 source/arm64
2023-04-15 09:42:06,446 - MainThread - awscli.clidriver - DEBUG - Arguments entered to CLI: ['s3api', 'list-multipart-uploads', '--debug', '--endpoint', 'https://ams3.digitaloceanspaces.com', '--bucket', 'bucket_name']
...
2023-04-15 09:42:06,527 - MainThread - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): ams3.digitaloceanspaces.com:443
2023-04-15 09:42:07,268 - MainThread - urllib3.connectionpool - DEBUG - https://ams3.digitaloceanspaces.com:443 "GET /bucket_name?uploads HTTP/1.1" 200 None
2023-04-15 09:42:07,269 - MainThread - botocore.parsers - DEBUG - Response headers: {'x-amz-request-id': 'tx000000000000005a03780-00643a0eff-3790532c-ams3a', 'content-type': 'application/xml', 'date': 'Sat, 15 Apr 2023 02:42:07 GMT', 'strict-transport-security': 'max-age=15552000; includeSubDomains; preload', 'transfer-encoding': 'chunked'}
...
/bucket_name
uploads=
host:ams3.digitaloceanspaces.com
x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
x-amz-date:20230415T024206Z
...

And got nothing listed either. I did see uploads mentioned in the debug output, so perhaps I need to add an object ID or something. Not sure how yet.
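One reassuring detail: the debug log shows the request succeeded (HTTP 200) and returned an XML body, and an empty ListMultipartUploadsResult with no Upload entries is exactly what "nothing listed" looks like. A small stdlib sketch of pulling pending uploads out of such a response (the payload below is made up for illustration):

```python
import xml.etree.ElementTree as ET

# Illustrative payload modeled on the S3 ListMultipartUploads response;
# the key and upload ID are invented for this example.
SAMPLE = """<?xml version="1.0" encoding="UTF-8"?>
<ListMultipartUploadsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Bucket>bucket_name</Bucket>
  <IsTruncated>false</IsTruncated>
  <Upload>
    <Key>backups/big.tar.gz</Key>
    <UploadId>example-upload-id</UploadId>
    <Initiated>2023-04-01T12:00:00.000Z</Initiated>
  </Upload>
</ListMultipartUploadsResult>"""

NS = {"s3": "http://s3.amazonaws.com/doc/2006-03-01/"}

def pending_uploads(xml_text):
    """Return (key, upload_id) pairs for uploads still in progress."""
    root = ET.fromstring(xml_text)
    return [(u.findtext("s3:Key", namespaces=NS),
             u.findtext("s3:UploadId", namespaces=NS))
            for u in root.findall("s3:Upload", NS)]

print(pending_uploads(SAMPLE))  # [('backups/big.tar.gz', 'example-upload-id')]
```

Each (key, upload_id) pair is what aws s3api abort-multipart-upload --key ... --upload-id ... would need to clean one up; an empty list means the API sees no pending uploads in that bucket.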

I do, however, think it shows about 190 GB of which only about 10 GB is really mine now, and some stuff is in the trash. How do I empty the trash?

Tray2

I usually use the df -h command to see how much space I have.

You will get something like this

df -h
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           1.6G  163M  1.4G  11% /run
/dev/sdb2       109G  109G     0 100% /
tmpfs           7.8G     0  7.8G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
/dev/sdb1       511M  5.3M  506M   2% /boot/efi
/dev/sda1        11T  7.5T  2.9T  73% /mnt/raid
tmpfs           1.6G  4.0K  1.6G   1% /run/user/1000

That way you will see where the space is consumed.

https://opensource.com/article/18/7/how-check-free-disk-space-linux

You can also use du -a /home | sort -n -r | head -n 5 to find the five largest directories.

https://www.tecmint.com/find-top-large-directories-and-files-sizes-in-linux/
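That du pipeline can be approximated in a few lines of Python, for what it's worth. A rough sketch (unlike du, it sums only the files directly in each directory, not subdirectories cumulatively):

```python
import os

def top_dirs(root, n=5):
    """Roughly what `du -a root | sort -n -r | head -n 5` surfaces,
    except sizes here are per directory, not cumulative."""
    sizes = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        total = 0
        for name in filenames:
            try:
                total += os.path.getsize(os.path.join(dirpath, name))
            except OSError:
                pass  # unreadable or vanished file: skip it
        sizes[dirpath] = total
    return sorted(sizes.items(), key=lambda kv: kv[1], reverse=True)[:n]

print(top_dirs("."))
```

Handy on a local machine, but note it does not help with the original question: a Spaces bucket is remote object storage, not a mounted filesystem, so neither df nor du can see it.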

1 like
rhand (Level 6)

@Tray2 At the start I did use

s3cmd du -H s3://bucket_name
  16G     276 objects s3://bucket_name/

but according to Digital Ocean Spaces there are incomplete multipart uploads that are not shown this way: partial uploads of sorts that take up space, causing the total to be 195+ GB.

I was using the command-line tool s3cmd, and later aws as suggested by DO, to list these files. You cannot SSH into a bucket... So far no luck locating these mysterious partials, but DO Spaces said they are escalating the issue with their engineers now.

Still think there needs to be a way to display the items in their trash, and a way to empty it. I threw away a lot of items via their UI, but the reported usage is still far more than what I actually have left.
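The mechanics of the discrepancy, with made-up round numbers: s3cmd du can only sum completed objects, while the panel bills completed objects plus any parts of multipart uploads that were started but never completed or aborted.

```python
GB = 1024 ** 3

# Illustrative figures only, not the thread's real numbers.
completed_objects = 16 * GB   # what `s3cmd du` can see and sum
abandoned_parts = 179 * GB    # uploaded parts of never-finished multipart uploads

visible = completed_objects
billed = completed_objects + abandoned_parts

print(f"s3cmd du reports: {visible // GB} GB")  # 16 GB
print(f"panel bills:      {billed // GB} GB")   # 195 GB
```

The abandoned parts are real bytes sitting in the bucket, but they are not objects yet, so object listings (and tools built on them) never show them.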

rhand (Level 6)

It seems to have been solved: I now see only 16.9 GB and 285 objects in the Digital Ocean Spaces panel, and the same using the command-line tool s3cmd:

s3cmd du -H s3://smtapp
  16G     285 objects s3://bucket_name/

I just have not been told how they solved it yet.

Tray2 (Level 73)
Best Answer

@rhand Could be that they have some batch job running once a day that cleans out the trashed files after a few days.
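If the cleanup really is a scheduled job on the provider's side, the self-service equivalent on S3-compatible stores is a bucket lifecycle rule that aborts stale multipart uploads automatically. A sketch of that rule written out with the stdlib (the rule ID and the 3-day window are arbitrary choices here; check DO's documentation for whether your region honors lifecycle configuration before relying on it):

```python
import json

# Sketch of an S3-style lifecycle rule that auto-aborts incomplete
# multipart uploads a few days after they were initiated.
lifecycle = {
    "Rules": [
        {
            "ID": "abort-stale-multipart-uploads",  # rule name: our choice
            "Status": "Enabled",
            "Filter": {"Prefix": ""},               # apply to the whole bucket
            "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 3},
        }
    ]
}

with open("lifecycle.json", "w") as fh:
    json.dump(lifecycle, fh, indent=2)
```

It could then be applied with something like aws s3api put-bucket-lifecycle-configuration --endpoint https://ams3.digitaloceanspaces.com --bucket bucket_name --lifecycle-configuration file://lifecycle.json, which would have prevented abandoned parts from piling up in the first place.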

rhand (Level 6)

@Tray2 Yes, that was probably it. Too bad I never managed to see the trash directory using the command-line tools mentioned, but now all good.
