First of all, Google does not ask your application whether crawling is allowed; it reads the robots.txt file. If a URL on your site is marked as disallowed there, Googlebot doesn’t make an HTTP request for it and ignores the URL.
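As a sketch of how a crawler reads these rules, Python's standard-library robots.txt parser can show whether a given URL is allowed; the domain and rules below are hypothetical examples:

```python
# Check how a crawler would interpret robots.txt rules, using the
# standard-library parser. "your-site.example" is a placeholder domain.
from urllib.robotparser import RobotFileParser

parser = RobotFileParser()
# In practice you would call parser.set_url("https://your-site.example/robots.txt")
# followed by parser.read(); here the rules are supplied directly.
parser.parse([
    "User-agent: Googlebot",
    "Disallow: /private/",
])

# Disallowed path: the crawler skips it without making a request.
print(parser.can_fetch("Googlebot", "https://your-site.example/private/page.html"))
# Any other path is fair game.
print(parser.can_fetch("Googlebot", "https://your-site.example/public/page.html"))
```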
Googlebot then parses the responses it receives from the different URLs. If you want to prevent link discovery on your website, you can apply the nofollow link attribute.
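As a sketch, the nofollow attribute goes on the link itself (the URL here is a placeholder):

```html
<!-- The rel="nofollow" attribute asks crawlers not to follow this link -->
<a href="https://example.com/some-page" rel="nofollow">Some page</a>
```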
It also helps to understand that pre-rendering is an excellent idea, as it makes your website load quickly for both crawlers and human visitors.
Unique snippets and titles
Use meaningful HTTP status codes
Using meaningful HTTP status codes is equally important so that Googlebot understands whether a particular page should be crawled or indexed. For example, the right status code can tell Googlebot that your website has moved somewhere else.
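A minimal sketch of the "site has moved" case: a server that answers every request with a 301 Moved Permanently, so crawlers know to index the new address instead. NEW_HOST is a placeholder for your real new domain.

```python
# Sketch: answer every GET with a 301 pointing at the new domain.
from http.server import BaseHTTPRequestHandler, HTTPServer

NEW_HOST = "https://www.example.com"  # placeholder for your new domain

class MovedPermanentlyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # 301 = moved permanently; crawlers transfer signals to the new URL
        self.send_response(301)
        self.send_header("Location", NEW_HOST + self.path)
        self.end_headers()

    def log_message(self, *args):
        pass  # silence per-request logging for the example
```

In the same spirit, a 404 for a removed page and a 200 for a healthy one each give Googlebot an unambiguous signal about what to do with the URL.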
Avoid 404 errors
Meta robots tags
A meta robots tag placed in a page’s HTML can prevent that specific page from being indexed, and its nofollow value tells crawlers not to follow the links on that page.
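A sketch of what such a tag looks like:

```html
<!-- Placed inside <head>: asks crawlers not to index this page
     and not to follow the links it contains -->
<meta name="robots" content="noindex, nofollow">
```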
For more information you can contact an SEO expert.
What is the right way to create links?
Ignoring the href attribute
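Google’s guidance is that its crawler only follows links in anchor elements that have an href attribute, so don’t ignore it. A sketch of what is and isn’t crawlable (URLs are placeholders):

```html
<!-- Crawlable: a real anchor with an href attribute -->
<a href="https://example.com/products">Products</a>

<!-- Not crawlable: no href, navigation handled only by script -->
<a onclick="goTo('products')">Products</a>
```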
Thinking about fragment identifiers
A hash symbol (#) marks a fragment identifier in a URL. Fragment identifiers point to a subsection of a page, not to different content, so crawlers ignore them and treat URLs that differ only in their fragment as the same page. This also means that if you build an app that serves different content behind fragment identifiers, crawlers will not discover those links.
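A short sketch of the difference (paths are placeholders):

```html
<!-- Fragment identifier: a subsection of the same page; crawlers ignore it -->
<a href="#pricing">Pricing</a>

<!-- Distinct path: crawlers treat this as a separate, discoverable URL -->
<a href="/pricing">Pricing</a>
```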
If you create links that Google can easily crawl, the search engine will more easily understand the type of content you are creating, and your chances of ranking higher in Google will also increase.