@jonjie it's a bit of a battle: you want some bots to index your site, e.g. Googlebot, but on the other hand you don't want other bots indexing and scraping your content.
one of the best ways is to make content accessible to logged-in users only, even if it's just a free account.
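as a rough sketch of what that looks like, here's a minimal Flask example with a hand-rolled `login_required` decorator (the route paths and the `user_id` session key are made up for illustration; any framework's auth middleware does the same job):

```python
from functools import wraps
from flask import Flask, session, redirect, url_for

app = Flask(__name__)
app.secret_key = "change-me"  # required for session support

def login_required(view):
    """Redirect anonymous visitors to the login page before serving content."""
    @wraps(view)
    def wrapped(*args, **kwargs):
        if not session.get("user_id"):
            return redirect(url_for("login"))
        return view(*args, **kwargs)
    return wrapped

@app.route("/login")
def login():
    # a real login form / signup flow would go here
    return "Log in (free account) to read the full articles."

@app.route("/articles/<slug>")
@login_required
def article(slug):
    return f"Full article content for {slug} - members only."
```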
if you are dealing with non-logged-in users, then you should add disallows within your robots.txt file. If they play nicely, bots will skip that content.
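something like this (the paths are just examples, adjust for your own URL structure); note that a more specific `User-agent` group overrides the `*` group, so Googlebot still gets in:

```
# Tell well-behaved crawlers to stay out of member content
User-agent: *
Disallow: /articles/
Disallow: /api/

# Still let Google index everything on the public side
User-agent: Googlebot
Disallow:
```

remember robots.txt is purely advisory - it only works against bots that choose to honour it.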
and also: convert pages to images, use random HTML tags and tag parameters, ban IPs if ..., ban certain types of browsers, ..... but as automica wrote, it's a real battle and you never win; it's always possible to do a screen copy and ...
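for the banning part, here's a rough sketch of user-agent blocking plus naive per-IP rate limiting as a Flask `before_request` hook (the blocklist, window, and limit values are invented for illustration; in production you'd do this at the nginx/CDN/fail2ban layer instead of in-process memory):

```python
import time
from collections import defaultdict
from flask import Flask, request, abort

app = Flask(__name__)

BLOCKED_AGENTS = ("curl", "python-requests", "scrapy")  # example blocklist
REQUEST_LOG = defaultdict(list)  # ip -> timestamps of recent requests
WINDOW, LIMIT = 60, 100          # max 100 requests per minute per IP

@app.before_request
def block_scrapers():
    # Reject requests whose user agent matches a known scraping tool
    agent = (request.user_agent.string or "").lower()
    if any(bot in agent for bot in BLOCKED_AGENTS):
        abort(403)
    # Keep only timestamps inside the window, then check the rate
    now = time.time()
    hits = REQUEST_LOG[request.remote_addr]
    hits[:] = [t for t in hits if now - t < WINDOW]
    hits.append(now)
    if len(hits) > LIMIT:
        abort(429)  # too many requests
```

of course any serious scraper just rotates user agents and IPs, which is why it stays a battle.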
or go a totally different way and try a win/win approach: let people know that your data is easily available if, for instance, the 'user' references your site, ...
To be honest, you cannot really prevent web scraping on your site.
If it is publicly available, it is scrapable. If the scraper has sufficient motivation, they will be able to do it.
@snapey do you have more explanation of why or how that makes it more difficult? I think that's the best solution available today - just make it difficult - but I need more explanation.
@jonjie I would guess having incremental ids makes it easier for a bot to guess your URL structure and step through your pages. If you have non-incremental ids or use slugs, it's harder work, as it will involve a spider actually crawling your links.
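to make the difference concrete, here's a tiny Python sketch (the `/post/` path is just an example):

```python
import uuid

# Sequential ids let a bot enumerate every page without crawling anything:
sequential_urls = [f"/post/{i}" for i in range(1, 4)]

# Random UUIDs (or unique slugs) make the URL space impossible to walk;
# a scraper has to discover each link by actually spidering your pages.
random_urls = [f"/post/{uuid.uuid4()}" for _ in range(3)]

print(sequential_urls)  # ['/post/1', '/post/2', '/post/3']
print(random_urls)      # e.g. ['/post/3f1c9b2e-...', ...]
```

with sequential ids a scraper just loops a counter; with opaque ids it needs a full crawler, which is slower, noisier, and easier for you to spot and rate-limit.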