Good one.
What I'm missing is the reason behind a score. For example, if a package is rated medium or low, why? No documentation, an outdated PHP version, etc.
That would make the results/scores more transparent.
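A score-with-reasons output could look something like the sketch below. Everything here is invented for illustration (the criteria, weights, and thresholds are not laraplugins.io's actual algorithm); the point is just that each deduction records a human-readable reason alongside the rating:

```python
from dataclasses import dataclass

# Hypothetical package facts -- fields chosen only to illustrate
# the kinds of signals a health score might use.
@dataclass
class PackageFacts:
    has_documentation: bool
    min_php_version: str        # e.g. "7.4" or "8.2"
    days_since_last_commit: int

def score_package(facts: PackageFacts) -> tuple[str, list[str]]:
    """Return a rating plus the reasons that lowered it."""
    reasons: list[str] = []
    points = 100
    if not facts.has_documentation:
        points -= 30
        reasons.append("no documentation")
    # Lexicographic compare is OK here because PHP majors are 7.x/8.x.
    if facts.min_php_version < "8.0":
        points -= 30
        reasons.append(f"old PHP version ({facts.min_php_version})")
    if facts.days_since_last_commit > 365:
        points -= 20
        reasons.append("no commits in over a year")
    rating = "high" if points >= 80 else "medium" if points >= 50 else "low"
    return rating, reasons
```

With this shape, a "low" rating comes back with the list of deductions that produced it, so the site could show the reasons right next to the badge.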
Hi everyone,
I’ve been working on a site to help fellow Laravel developers, and I’d really appreciate your feedback and suggestions to make it better.
The site is laraplugins.io. Its goal is to help developers quickly and easily assess the health and reliability of Laravel plugins, ideally in under five minutes, without needing to dive into the GitHub source code.
Right now, everything is still a work in progress, so any suggestions on how to improve the site are welcome.
A quick note: There’s no AI review system in place. All scores and insights are generated algorithmically to ensure consistency and a reliable experience.
P.S. I’m planning to add an MCP (Model Context Protocol) server to the roadmap. This will allow AI assistants and agents to search and evaluate plugins before deciding to install them.