Shivamyadav's avatar

Can 28 indexes slow down importing 500 rows?

Can 28 indexes on a database table's columns slow down a 500-row CSV import in Laravel? The import takes 8 to 9 minutes. Also, when I update those same 500 rows after they've been added, it takes about half the time of the import.

Many other things are happening during the import as well, like creating the parent division and child divisions, auto-assigning the user to events and courses, etc.

0 likes
10 replies
JussiMannisto's avatar

The number of retrieved rows isn't that relevant. What matters is what the queries do, and what indexes they're able to utilize.

How big is the database table?

28 indexes is quite a few (do you really need all of them?) but they wouldn't cause anything like what you're describing. They'll slow down inserts a bit because the indexes have to be updated, but not selects.

You need to debug which queries or events are causing the issues. If this occurs in development, use something like Debugbar and check the Timeline and Queries tabs. Or use Telescope. In production, Pulse will show you the slow queries.

My guess is that either the database queries are poorly designed, they don't utilize indexes effectively, or both. The fact that there are 28 indexes on a table makes me very suspicious. Are these indexes actually designed with the queries in mind, or are they just slapped on every column?

Most likely this issue could be solved by fixing the query or adding a single composite index that's fit for purpose.
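To make that last point concrete, here's a minimal sketch of swapping per-column indexes for one composite index in a Laravel migration. The table and column names (`users`, `division_id`, `status`, `created_at`) are hypothetical; the real index has to match your actual WHERE/ORDER BY clauses.

```php
<?php

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

return new class extends Migration
{
    public function up(): void
    {
        Schema::table('users', function (Blueprint $table) {
            // Drop indexes that were added per-column without a query in mind...
            $table->dropIndex(['division_id']);
            $table->dropIndex(['status']);

            // ...and add one composite index shaped for the real query, e.g.
            // WHERE division_id = ? AND status = ? ORDER BY created_at
            $table->index(['division_id', 'status', 'created_at']);
        });
    }

    public function down(): void
    {
        Schema::table('users', function (Blueprint $table) {
            $table->dropIndex(['division_id', 'status', 'created_at']);
            $table->index('division_id');
            $table->index('status');
        });
    }
};
```

Column order in the composite index matters: equality columns first, then the sort column.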

1 like
Shivamyadav's avatar

Thanks for the reply dude. I will let you know tomorrow after using the Laravel debugger in dev mode.

JussiMannisto's avatar

I misread your post. I thought you were talking about exporting to CSV.

Anyway, inserting 500 rows should not take anywhere near that long, even with 28 indexes.

Batched imports would probably speed things up, if you're not already doing that. But there must be a bigger underlying reason why the import takes so long. More information is needed.
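For what it's worth, a batched insert in Laravel is a one-liner per chunk. This sketch assumes `$rows` is the already-parsed CSV as an array of associative arrays:

```php
<?php

use Illuminate\Support\Facades\DB;

// One multi-row INSERT per chunk instead of 500 single-row INSERTs.
// Each single-row INSERT pays the index-maintenance cost in its own
// round trip; chunking amortises that across 100 rows at a time.
foreach (array_chunk($rows, 100) as $chunk) {
    DB::table('users')->insert($chunk);
}
```

Note that `DB::table()->insert()` bypasses Eloquent events and mutators, so it only fits imports that don't rely on them.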

Tray2's avatar

Yes, it will likely slow down the import.

You should only have as many indexes as you need, and as few as you can get away with, as Aaron Francis puts it.

Think of an index as an alphabetically sorted box of cards, divided into sections A to Z. Every time you insert something into the index, you first have to find the right section, and then the exact place for it within that section. Now imagine doing that 28 times for each card.

If you're using MySQL or MariaDB, every insert also has to update each of those indexes before the statement completes, and the rows involved stay locked until it does. That will be fairly fast per index, but 28 is a lot, and with concurrent writes all that index maintenance increases lock contention and the risk of deadlocks.

Indexes should basically only be used on columns that appear in WHERE clauses, ORDER BY clauses, and foreign keys.
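If the import really is index-bound, one blunt option is to drop the non-essential secondary indexes, run the import, and recreate them afterwards, so the maintenance cost is paid once instead of per row. A sketch (the index name is hypothetical, and this is only safe while nothing else needs the index):

```php
<?php

use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

// Drop a secondary index before the bulk import...
Schema::table('users', function (Blueprint $table) {
    $table->dropIndex('users_status_index');
});

// ...run the 500-row import here...

// ...then rebuild the index once, after all rows are in.
Schema::table('users', function (Blueprint $table) {
    $table->index('status');
});
```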

2 likes
Shivamyadav's avatar

Thanks 😊 for the suggestions. I will check one more time with your suggestions and let you know.

Snapey's avatar

If you were inserting 500 rows one by one into a table of 2 million rows with unique constraints then it would slow it considerably.

It would also be slow if for each insert you were also querying other tables to resolve foreign key values.

But as you are providing next to zero information, it's hard to guess.

Shivamyadav's avatar

Sorry 😔.

I will provide the information once I am on my machine.

For now: my senior wrote the whole thing, and while optimising it I found that creating each user was the problem, since it involves many steps like creating the user's Stripe account, divisions, experts, etc.

So I moved that work into a queued job and imported the users again, and now it's taking 8 minutes.

Without the job (i.e. with the dispatch commented out) the import took about 1 minute 20 seconds; with the job added it takes those 8 minutes, apparently spent resolving the job's dependencies and serializing the constructor payload. I am using AWS SQS.
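One thing worth checking, as a sketch (the job class and helper below are hypothetical): if you dispatch one job per user, each dispatch is a separate HTTP round trip to SQS, and 500 of those adds up fast. Queueing a single job that carries all the IDs avoids that:

```php
<?php

use App\Jobs\ProvisionUsers; // hypothetical job class

$userIds = [];
foreach ($rows as $row) {
    $user = createUserFromRow($row); // hypothetical import helper
    $userIds[] = $user->id;
}

// One SQS message instead of 500; passing scalar IDs keeps the
// payload small and avoids serializing anything heavy from the
// job's constructor.
ProvisionUsers::dispatch($userIds);
```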

martinbean's avatar

Can 28 indexes slow down importing 500 rows?

@shivamyadav Indexes are look-up structures. If you create lots of indexes, that's lots of structures that have to be maintained on every write, which takes time and defeats the point of using indexes.

You should only be creating indexes for columns that are frequently used in retrievals. If indexing every column made all look-ups faster, everyone would just index every column, and there would be no point in choosing indexes at all.

1 like
wordxpression's avatar

Many other things are happening during the import as well, like creating the parent division and child divisions, auto-assigning the user to events and courses, etc.

Try to isolate the problem in a test environment. I can't imagine an application where a table needs 28 keys; usually that many is the result of poor database design. First, disable all other DB interactions in your test environment, then try the import with the 28 keys in place, slowly re-enable the other features, and the culprit will show itself.

But personally, my first step would be to figure out which indexes are really needed. And remember, an index only speeds up a search when the column has many distinct values. An index on a boolean column, for example, is practically useless.
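A quick way to check that, as a sketch: compare the number of distinct values to the number of rows. The closer the ratio is to 1, the more selective (and useful) an index on that column is; a boolean column will score near 0. The column and table names here are hypothetical:

```php
<?php

use Illuminate\Support\Facades\DB;

// Rough selectivity check for a candidate index column.
$row = DB::selectOne(
    'SELECT COUNT(DISTINCT status) / COUNT(*) AS selectivity FROM users'
);

echo $row->selectivity; // near 1.0 = worth indexing, near 0 = probably not
```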

jlrdw's avatar

Also have the designer of that database take a course in database management. I guarantee you there is no way 28 indexes are needed. Maybe 3 or 4 at the most.

Perhaps the table needs to be broken up into related tables.
