topseosmo.com

@Chiappetta492

Chiappetta492

Last seen: Tue 11 May, 2021

Signature:

Recent posts

 query : Re: Is this Google proxy a fake crawler: google-proxy-66-249-81-131.google.com? Recently I discover that some variants of a google proxy visits my sites. I doubt these are legal Google crawlers because

@Chiappetta492

When your servers are getting hit by bots, always Google their IP address before blocking them.

A search for "IP address 66.249.81.131" shows that this is an IP is that is owned by Google.

When a search for an IP address doesn't return a company that you want crawling your site, it's most likely safe to block it.
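
If you want to double-check an address yourself rather than rely on a web search, here is a minimal Python sketch of the commonly documented reverse-then-forward DNS check (the IP below is the one from your question; treat the suffix list as an assumption):

import socket

ip = "66.249.81.131"  # the address you want to verify

# Reverse lookup: genuine Google crawlers and proxies resolve to
# hostnames ending in google.com or googlebot.com
hostname, _, _ = socket.gethostbyaddr(ip)

# Forward lookup: the hostname must resolve back to the same IP,
# otherwise the reverse record could be spoofed
resolved_ip = socket.gethostbyname(hostname)

is_google = hostname.endswith((".google.com", ".googlebot.com")) and resolved_ip == ip
print(hostname, "->", resolved_ip, "| Google:", is_google)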


 query : Re: Site verified a long time ago is now listed in Google Search Console "not verified" I have a problem with Google Webmasters - my main property (https://example.com/) looks like it is not verified.

@Chiappetta492

Have you visited this page?
www.google.com/webmasters/verification/verification?hl=en&authuser=0&theme=wmt&siteUrl=http://example.com
Put your url in instead of example.com

Search Console should tell you why your site is not verified.

If you see methods to verify your site, my personal preference is to click "Alternate Methods" and check the box for HTML file upload. Upload the HTML file to your root directory and verify.

If Google can crawl your site it should be able to verify it with the HTML file upload as long as it's on the server.

I know that you know how to verify a site, but you may need to reverify it. A site can lose verification for a few reasons, for example when it has been taken down temporarily or its servers stop responding. In that case a fresh HTML file upload and re-verification usually resolves it.


 query : Re: Canonical link to the same page Recently I learned that e.g. Google was not indexing pages that didn't have canonical links, so in the code below, I am setting the canonical link to the homepage

@Chiappetta492

Set the canonical URL to the page that you want to have indexed.

If your site has


example.com/news/article/

example.com/news/article

example.com/news/article.html


all serving the same page, this is a duplicate content issue.

If you put a rel=canonical tag on example.com/news/article pointing to your homepage, then Google won't index your news article page. You want the canonical tag on each of those URLs to point to example.com/news/article.
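
For example, each of those URL variants would carry the same tag in its head (a sketch, assuming example.com/news/article is the version you want indexed):

<link rel="canonical" href="https://example.com/news/article">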


 query : Re: Does GDPR make a distinction between individual and company email addresses? For a B2B company, does GDPR make a distinction in the level of consent required for sending marketing emails to individual

@Chiappetta492

It is difficult to discern a business email address from a private one. The GDPR is about privacy and unwanted marketing.

If you're sending unrequested marketing to business email addresses, you do risk being penalized under the new GDPR law in the EU.

People are speculating that the GDPR doesn't spell out a lot of specifics and instead sets general guidelines for acceptable business practice. From what I gather, that means not sending unwanted marketing spam to people, and I believe it includes unrequested marketing email to business addresses.

You need the EU citizen's consent to send them emails, and you need to store the information that shows evidence of their consent.

Source: I'm not a lawyer.


 query : PageSpeed Optimization I'm not sure if this is the right StackExchange, and if not, I apologise. I've been asked to look after a number of WordPress sites running on a WHM/cPanel hosted VPS

@Chiappetta492

Posted in: #GooglePagespeed #PageSpeed #Performance

I'm not sure if this is the right StackExchange, and if not, I apologise.

I've been asked to look after a number of WordPress sites running on a WHM/cPanel hosted VPS and I'm looking at the page performance using Google's PageSpeed Insights.

Across 6 sites I'm getting advice to "Reduce server response time", which runs at about 1.5 seconds.

If I use 'inspect' from the Chrome right-click menu and select the Network tab, I can see the TTFB is roughly 1 second. Can anyone advise me on what I should be prioritising to fix this?


 query : Re: Does too many 301 redirects harm SERP rankings In last 15 months, our website have undergone 3 major structural changes. Due to which we had to create thousands of 301 redirects. We used to

@Chiappetta492

This is a tricky one. I'm not sure what has caused Google to deindex your pages. All I can offer are my thoughts.

301 redirects should pass 95-100% of page juice, so this on its own shouldn't cause your website to get deindexed from Google. Google seems to think that 45,000 of your pages are duplicate/alternate content. Have you tried using fetch and render on some of those pages to see if Google can resolve them with a 200 status header? I would also use fetch and render and the mobile friendly test to see what content Google can view on those pages. Is some of your content missing from Googlebot's view, causing it to see the page as a duplicate?
www.google.com/webmasters/tools/googlebot-fetch
https://search.google.com/test/mobile-friendly



The other possible issue is that a Google algorithm update might have hit your site hard. This happens to webmasters sometimes: when Google changes its algorithm, some websites get absolutely destroyed in their rankings, as happened with the Panda and Penguin updates. Luckily, there's information on what the general algorithmic changes were, so you can see what updates were made, and if it looks like you were doing something that a new update penalizes, you could potentially change it and regain your rankings. I recommend looking at the algorithm update histories on pages like the following:
moz.com/google-algorithm-change


About 16 months ago, I redirected a website with 50,000 indexed pages from example.com/url/page to example.com/url1 and within a few months, Google indexed millions of my pages and increased my search traffic significantly. I found that the content update played a huge role in increasing my rankings. My content improved and the 301s worked.



Ultimately, I'd look closely to see whether Google is able to crawl the pages properly with fetch and render and the mobile friendly test. I'd then try to figure out why it thinks these pages are duplicates, and which pages it thinks they are duplicates of. Is there a way to add more content to those duplicate pages that would encourage Google to crawl and reindex them? Then I would look internally and try to work out whether there was some sort of content or link scheme in the past that an algorithmic update has punished, fairly or unfairly.

I'd also look at crawl errors. Are there any issues with why Google can't crawl your page? www.google.com/webmasters/tools/crawl-errors?hl=en&siteUrl=


 query : Re: Marketplaces and website SEO I would like to know if this is an issue for SEO if the description and the title of the product is the same on the marketplaces and on my site. Thank you in

@Chiappetta492

I would alter the title and the description and add some alternative content. Google will probably not rank 2 different pages if the title and description are exactly the same. Even if the content on the pages is different, identical titles and descriptions don't look good in the search results.

It's important to put the keywords in the title and description that you want to rank for. So if you're selling "strawberry bubblegum" I would make sure that you have that keyword in both the title and description. But you should have some slight variation regarding the other words that you are using.
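
As a rough sketch (hypothetical product page), the keyword stays in both tags while the surrounding wording differs from the marketplace listing:

<title>Strawberry Bubblegum - 12 Pack | Example Candy Shop</title>
<meta name="description" content="Buy strawberry bubblegum in a 12 pack. Chewy, long-lasting flavour, shipped straight from our own warehouse.">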


 query : Re: How do I get keywords indexed when text is so complexly styled that words are broken up? I was required to implement a web page that has sections where each has a designed title. Basic structure

@Chiappetta492

I think you can wrap all the divs in one h1 tag. My guess is that Google will essentially see it as <h1>¿QUIENES SO MOS?</h1> or as <h1>¿QUIENESSOMOS?</h1>

<div class="customH1">
<h1>
<div class="quienes">¿QUIENES</div>
<div class="so">SO</div>
<div class="mos">MO<span class="underlined">S</span>?</div>
</h1>
</div>


The client window sees it as:



¿QUIENES
SO
MOS?



In the end I wouldn't obsess over h1-h3 tags too much. They're a keyword ranking signal, but Google has learned to analyze the content on a page far better than it could several years ago, so h1 tags are likely not as important as they once were. I don't think h1 tags are strictly necessary on a site, and a lot of my sites don't even have them, although they would probably give a slight SEO boost.


 query : Re: SEO Root Page Didn't Index I have a company webpage at "example1.com" and was developing a new webpage at "exemple2.com" (because it was everything already set). When I finished the page, I

@Chiappetta492

It looks like you may have messed this one up pretty badly. By getting example2.com indexed, Google chose to rank that domain over example1.com because of the duplicate content. You should have been using a rel=canonical header pointing to example1.com to keep it in the SERPs.

Now that you have requested example2.com to be deindexed and removed the DNS, you can't even 301 redirect to example1.com or add the headers in for example2.com.

If you can forward example2.com to example1.com as a 301 redirect, I recommend that you consider doing so. And if you do, remove the request to deindex example2.com so that Google may crawl it and find example1.com again.

It may take some time for your homepage to get indexed by Google. There's no knowing just how long Google will take before indexing a page it has discovered. You may have to play the waiting game. But if you do the aforementioned things it's more likely that it will come back sooner.


 query : Re: Can only the registrar's name server host the SOA record for a new domain? Say I have registered domain my-domain.example Am I right that only the registrar NS server where I've registered the

@Chiappetta492

Your registrar's nameservers host the SOA record for your domain by default. The SOA record identifies the primary nameserver and the administrative contact for the zone. If you point the domain at other nameservers, for example ones provided by your web hosting company, those nameservers will serve the SOA record instead.


 query : Is there a way to get Google search console crawl stats for larger than 90 days? Google search console allows you to see how many pages Google is crawling per day on your site, but it only

@Chiappetta492

Posted in: #CrawlRate #Googlebot #GoogleSearchConsole #SearchConsole #WebCrawlers

Google search console allows you to see how many pages Google is crawling per day on your site, but it only goes back 90 days. You can view your crawl stats here:
www.google.com/webmasters/tools/crawl-stats?hl=en&authuser=0&siteUrl=

A site of mine started picking up over 90 days ago and Google was crawling it rampantly, but I wasn't checking the search console for this domain at the time.

Does Google store crawl stats beyond 90 days, and is there any way to access this data through Search Console or a CSV export?


 query : Re: How to change the owner of a .co.uk whose owner has died? Years ago, I registered a .co.uk domain for my mother using my own 123-reg account. I set the registrant name to hers as it was

@Chiappetta492

As far as I understand it, if you have access to the registrar account where the domain is held, you can simply log in and unlock the domain for transfer, and then pick which registrar and account you want to send it to. In other words, if you can log in to your mother's registrar account, you can probably just transfer the domain to yourself in an account with your name on it.


 query : Re: Do robots.txt and sitemap.xml need to be physical files? I have both setup in my routes: Route::get('/robots.txt', function() { // robots.txt contents here }); Route::get('/sitemap.xml', function()

@Chiappetta492

It is possible for a sitemap.xml and robots.txt to exist, as physical files or as routes, and still be returned to bots with a 404 Not Found status.

I recommend that you search Google for a header status checker. Run your sitemap and robots.txt URLs through it and see what header status is reported. If the status is 404 then there is an issue; you need the status to be 200 (OK).

There can be many reasons why a page returns a 404 status despite existing. A lot of the time it has to do with conflicting code, and the most likely culprit is something in your .htaccess file (or routing configuration) changing the status.
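
If you'd rather check it yourself, here is a minimal Python sketch (example.com is a stand-in for your own domain) that prints the status code each URL returns:

import urllib.error
import urllib.request

# You want both of these to report 200
for path in ("/robots.txt", "/sitemap.xml"):
    url = "https://example.com" + path
    try:
        with urllib.request.urlopen(url) as response:
            print(url, "->", response.status)
    except urllib.error.HTTPError as err:
        print(url, "->", err.code)  # 404 here means the route isn't actually serving the file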


 query : Re: How to properly redirect SEO from an old domain to a new one - should I display a message first? I have a domain, lets say example.com. I am rebranding the website and moving to example2.com.

@Chiappetta492

The best thing to do is 301 redirect all of the pages from example.com to example2.com. So example.com/directory/page1.html goes to example2.com/directory/page1.html.

If you 301 redirect this way you will maintain the indexing of all of those pages in the search results, and all of the link juice pointing to those pages will also pass on to the new domain.

If you simply do a JavaScript redirect with a 3,2,1 countdown like you said, Google isn't going to pass those rankings to example2.com. In fact, it's going to crawl the pages on example.com, see that they are now effectively empty pages with no real content, and deindex all of them. You're going to lose substantial traffic and SEO this way.

You have to 301 redirect the old domain to the new. You can do this in htaccess and/or in PHP.
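
For the .htaccess route, a minimal sketch (assuming Apache with mod_rewrite enabled and that example2.com already serves the same paths) would be:

# Send every path on the old domain to the same path on the new one
RewriteEngine On
RewriteCond %{HTTP_HOST} ^(www\.)?example\.com$ [NC]
RewriteRule ^(.*)$ https://example2.com/$1 [R=301,L]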


 query : Re: Googlebot flooding server with requests for junk URLs with random data I'm having some trouble with GoogleBot. It keeps requesting a random URL that doesn't exist. It is trying to access:www.example.com/index.php/{TOKEN}

@Chiappetta492

I've found that Googlebot crawls URLs on my sites that don't exist, have no content, and aren't linked from any pages. It appears that Google sometimes types words into the search bars of websites and crawls the resulting URLs.

You can limit the crawl rate for Googlebot in Google Search Console.

If you feel that 301 redirecting these URLs back to the homepage isn't helping Google crawl your site, you can return a 403 Forbidden status for them instead. This will potentially stop Googlebot from requesting them. If they all sit under a specific directory, you can also disallow it in robots.txt.
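
For example, assuming the junk URLs all sit under /index.php/ as in your example and nothing you want crawled lives there, a robots.txt rule like this blocks that whole path:

User-agent: Googlebot
Disallow: /index.php/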


 query : Re: What is the difference between Google Webmaster Tools and Google Search Console? I see many questions concerning the Google Search Console and Google Webmaster Tools. What is the major differences

@Chiappetta492

They are the same service. Google Webmaster Tools was renamed Google Search Console, and the two names are still used interchangeably: the URL contains "webmasters/tools" while the title of the page says "Search Console".
www.google.com/webmasters/tools/home?hl=en


 query : Re: What is the impact on SEO of deleting all content on a site and starting over with a new content management system? I am thinking of hitting the refresh button on my custom asp.net website

@Chiappetta492

In late 2016 I switched platforms on a website and started anew. I 301 redirected all of the indexed URLs from example.com/directory1/ to example.com and I put up the new platform on example.com/directory2/ . I went from about 4k daily organic search traffic in late 2016 to 35k daily organic search traffic in early 2017. The new platform and 301 redirect to the homepage worked for me.

However, there are some things worth mentioning. If you can retain the pages in /directory1/ without 301 redirecting them to the homepage, that will likely be much better: those pages are already indexed and receiving traffic, and once you 301 redirect them Google will eventually deindex those URLs. That said, by 301 redirecting you will retain 99-100% of your link juice. It will just take a while for Google to crawl and index the new pages it discovers.


 query : Re: Impact of using .resx files to store multilingual content on the S.E.O Could anyone tell me whether google crawlers are capable of reading through the contents of a multilingual website which

@Chiappetta492

I recommend you use Googlebot's fetch and render tool. It will show you exactly how Googlebot sees the webpage with your resx files and whether or not the resx script has an impact on Google's crawling service.
www.google.com/webmasters/tools/googlebot-fetch


 query : Re: How can I block who.is and archive.org from getting information of my website with htaccess? who.is is a service that gives people whois information of websites and archive.org automatically saves

@Chiappetta492

You can block archive.org from crawling your site in your robots.txt file with

User-agent: ia_archiver
Disallow: /


I believe putting this in your .htaccess file will also block archive.org's crawler from accessing your site:

SetEnvIfNoCase User-Agent "^ia_archiver" bad_bot

<Limit GET POST HEAD>
Order Allow,Deny
Allow from all
Deny from env=bad_bot
</Limit>


 query : Wordpress: why is there a copy of wp-includes in my wp-includes? So I noticed that in the wp-includes folder of one of my sites there are sub-folders named wp-includes and wp-content that i

@Chiappetta492

Posted in: #Wordpress

So I noticed that in the wp-includes folder of one of my sites there are sub-folders named wp-includes and wp-content that I suspect should not be there. These contain the same sub-folders as the ordinary wp-includes and wp-content. I am thinking one of two things may have happened:


I have accidentally copied those folders there without noticing.
Some WordPress feature or plugin has put copies there for some reason. Here's my question: do you know of any plugin that would do that? A backup plugin like Updraft or Duplicator perhaps, or maybe WordPress itself when I upgraded to a newer version? And am I safe to remove those directories?


I discovered it when attempting to use Duplicator; it failed (probably because my hosting service timed out due to too much data) and gave suggestions of files/directories to filter out of the backup process.


 query : Re: How many reviews to include in Schema markup (JSON-LD) on product pages? We have product pages with multiple reviews. Some have 200. 300 reviews. Obviously we do not put them all in the page

@Chiappetta492

I think a webpage should only apply schema markup to the reviews that are actually on that page. Schema markup helps Google determine what content is on your page, so applying markup to reviews that a user can only see after clicking "view more" isn't an accurate description of the page's content.

Applying schema markup for content that isn't actually on the page is almost like cloaking. I don't know if Google will penalize it, but I don't think it's using schema markup the way it was intended, and it may end up hurting you.

Let's look at an example.

Imagine a page carries schema markup about "apples", but its content is only about "cupcakes". A user who clicks the result because of the markup, hoping to see content about apples, finds only a page about cupcakes; the wrong content was served to that user. So a page shouldn't carry schema markup about apples, or about anything else that isn't actually on the page.
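
Applied to the review question, a rough JSON-LD sketch (hypothetical values) would list only the reviews that are actually rendered on the page:

{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Product",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.4",
    "reviewCount": "213"
  },
  "review": [
    {
      "@type": "Review",
      "author": {"@type": "Person", "name": "Jane D."},
      "reviewRating": {"@type": "Rating", "ratingValue": "5"},
      "reviewBody": "One of the reviews that is visible on the page itself."
    }
  ]
}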


 query : How to use mw.site.siteName in Module:Asbox Exporting Template:Stub from Wikipedia for use on non-WMF wiki, it transcludes Scribunto Module:Asbox which has on line 233: ' is a [[Wikipedia:stub|stub]].

@Chiappetta492

Posted in: #Mediawiki

Exporting Template:Stub from Wikipedia for use on non-WMF wiki, it transcludes Scribunto Module:Asbox which has on line 233:


' is a [[Wikipedia:stub|stub]]. You can help Wikipedia by [',


Substituting Wikipedia with the magic word {{SITENAME}} doesn't work here. How can I replace Wikipedia with the comparable Lua value mw.site.siteName, so that pages transcluding the stub template show the local wiki name instead?


 query : How to find out the domains that 301 redirect to my website? How can I find out what domains are 301 redirected to my website? For example if a.com, b.com, and c.com are 301 redirected to

@Chiappetta492

Posted in: #301Redirect #Redirects #Seo

How can I find out what domains are 301 redirected to my website?

For example if a.com, b.com, and c.com are 301 redirected to my website, then how can I know the list of a.com, b.com, and c.com?


 query : Re: SEO - switching domains on an established site to a brand new domain I made a new site for a client that is a huge improvement in terms of asset optimisation, page loads, on-page SEO etc.

@Chiappetta492

Your client can switch domains and 301 redirect all of his ranked pages to the new domain as you said, as long as you 301 redirect example.com/page.html to example.org/page.html rather than redirecting everything to the root of example.org.

301 redirects retain 95-99% of the link juice that was originally passed. So your client should not lose much ranking in doing it this way.

If you leave both sites up for too long, Google will see this as a duplicate content issue and may not know which site to rank in its SERPs, diluting his rankings between the two sites. It is imperative that you 301 redirect quickly, before this becomes a factor.

You can also choose to use the rel=canonical tag if you want to leave both sites up for a while, and this should show Google that you are moving the site to the new domain.

Overall, it can be unwise to change domains unless necessary. But if the new domain has a much better name, then it may be worth it. Moz changed its domain years ago, shortening seomoz.org to moz.com. It was a wise decision: they 301 redirected all of their pages to the new domain and retained their rankings, and Moz is one of the most trusted sources of SEO information on the web.


 query : Re: Google indexes the number of pages of a site differently depending on your browser language. Why? I was checking site:site.com in Google.co.uk and I changed my browser language to German and

@Chiappetta492

The only true way to determine how many pages are indexed by Google is the Index Status report in Search Console, here: www.google.com/webmasters/tools/index-status?hl=en&authuser=0&siteUrl=
Using site:example.com in a search is only somewhat reliable for determining how many pages are actually indexed.

It also makes sense for Google to index pages with German language in their German search engine, to index English pages in the American search engine, and to index Indian pages in the Indian search engine.

It wouldn't make much sense for Google to index pages with German language in their Indian search engine.


 query : Add page in another directory to the sitemap in root So I recently added a blog section to our website. The main site is not a wordpress site. I added the blog section by creating a new

@Chiappetta492

Posted in: #GoogleAnalytics #Seo #Sitemap #Wordpress

So I recently added a blog section to our website. The main site is not a wordpress site. I added the blog section by creating a new folder called blog and installing wordpress in it. Added the appropriate link to the navbar and I'm good to go. mysite.com/blog is up and running.

I want to be sure that the entire site is benefiting from the content in the blog section, so I generated a new sitemap, but it is not picking up anything from the blog folder. The sitemap looks exactly the same as it did before I made the blog. Can I manually add the blog page?

How do I go about tying together the root directory with the blog directory? I'm also wondering about this for use with Google Analytics. For example, there used to be a shop.mysite.com. However, it was a completely different root folder... so once someone navigated to the shop they were lost to GA. I'm hoping to have everything as consolidated and buttoned up as possible so that we're not repeating mistakes.

Thanks for any help.


 query : Re: Page-specific skins in MediaWiki? Is there a way to force a particular skin to be applied while displaying specific MediaWiki articles? In my wiki many articles will have a "flip" version with

@Chiappetta492

I'm not certain, but I believe the user's skin preference is the only built-in way. If you know how, cookies might work for page-specific skins, but that would require quite a bit of custom setup.


 query : Do we keep appending new domains in the Google's disavow tool or Just add new entry every time? I keep getting tons of bad links every week for my domain. My question is, As I find bad

@Chiappetta492

Posted in: #DisavowLinks #Google #GoogleSearch #GoogleSearchConsole #Serps

I keep getting tons of bad links every week for my domain.

My question is: as I find bad links every week, do I need to keep appending the new bad links to the disavow file, or should the file contain only the new finds each time I submit it every week?


 query : Re: How to batch remove spamming users and pages they created on MediaWiki? I'm trying to clean up a MediaWiki instance which has been subjected to spamming and vandalism for a period of time.

@Chiappetta492

I suggest installing Extension:BlockAndNuke to stop spam. It lets you instantly block users as well as nuke their contributions. You can provide a whitelist of legitimate users who are exempt from appearing on the list of nukable users. You'd whitelist your allowed users, then select all the remaining accounts (Ctrl-A) and click the relevant button to block them and nuke their contributions.

