
@Heady270

Heady270

Last seen: Tue 11 May, 2021


Recent posts

 query : Re: Is an 85% bounce rate measured by Google Analytics a red flag? I am trying to understand the bounce rate concept in Google Analytics. I have a blog focused on technical stuff (Java, Spring, etc.)

@Heady270

The default Google Analytics installation measures bounce rate as "the percentage of single-page visits or visits in which the person left your site from the entrance (landing) page."

Your 85% bounce rate isn't worrying to me because in my experience GA's default bounce rate measurement doesn't lead to actionable metrics. Many users find what they are looking for on the first page and leave satisfied. By default Google Analytics counts those users as bouncing.

Bounce rate should really tell you how many users leave quickly and unsatisfied. If people are spending 3-4 minutes on a page they really shouldn't count as a bounce.

Luckily you can send extra data to Google Analytics in the form of events to help determine what people are doing on the page. Google won't count users as bouncing if there are additional interaction events in their session, even if the users only visit the one page.

I have a currency calculator where most users land on a page with a JavaScript-powered calculator on it. They perform their currency calculation on the landing page and leave. I implemented GA events for users performing a conversion calculation, and my bounce rate fell from 85% to 25% overnight.
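
As a rough illustration, the event could be wired up like this (a minimal sketch that assumes the classic analytics.js snippet is already installed and that "convert-button" is a made-up id for whatever element triggers the calculation):

<script>
document.getElementById('convert-button').addEventListener('click', function () {
// An interaction event like this stops the visit from being counted as a bounce.
ga('send', 'event', 'Currency Calculator', 'calculate');
});
</script>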


Unless you spend time implementing events, I wouldn't pay any attention to the GA bounce rate metric. Most sites have pages where a user can consume just one page and be satisfied. Once you start implementing events, bounce rate becomes a much more interesting metric. You can implement events for:


Scroll depth (especially important for article pages; see the sketch after this list)
Time spent on page
Video playback
Loading AJAX content
Typing something on the page
Clicking on things in the page
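
As a rough sketch of the first item, a single event could be fired once the reader scrolls most of the way down the page (the 75% threshold and the category/action labels here are arbitrary choices, not anything prescribed by GA):

<script>
(function () {
var sent = false;
window.addEventListener('scroll', function () {
var scrolled = (window.scrollY + window.innerHeight) / document.documentElement.scrollHeight;
if (!sent && scrolled > 0.75) {
sent = true;
// One interaction event per page view is enough to affect bounce rate.
ga('send', 'event', 'Engagement', 'scroll depth', '75%');
}
});
})();
</script>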


You don't want to try to game bounce rate by making changes that hurt user experience. For example, it isn't helpful to split an article across several pages so that users have to click through multiple pages to read it. That technique was popular a few years ago, but it has fallen out of favor because it tends to drive some users away, even though the bounce rate stats may look better.

It is also worth noting that Google doesn't use Google Analytics data (including bounce rate) to determine how sites rank in its search results. Google may use "bounce back rate", where it observes users coming back to the search results shortly after trying your page. See Does a site's bounce rate influence Google rankings? and my answer there, which says a lot more about the difference between "bounce rate" and "bounce back rate".


 query : Re: SEO focused navigation I am a web designer and front-end developer. I almost always utilize a primary navigation bar that clearly demonstrates traditional website pages - "Home, About Us, Blog,

@Heady270

That approach is old school and would come across as over-optimisation. Don't put it in the footer either, as that is also a no-go.

Do optimise for search; the hard part is doing it in a transparent way that anticipates your users' needs. Good luck!


 query : Re: Can I use the same schema markup for my subdomain? I've already used an Organization schema markup for my main site. Can I use the same for my subdomain (which is a review site)?

@Heady270

Google's John Mueller says that schema should generally be unique and implemented on a page which has the schema data as its main focus: www.searchenginejournal.com/google-structured-data-unique-page/239507/
It sounds like your main domain home page is the correct place for your organization schema. It shouldn't be reused on the sub-domain or even on other pages on the main domain.
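
For reference, a minimal Organization markup for the main domain's home page might look something like this (the name and URLs are placeholders, not anything taken from your site):

<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "Organization",
  "name": "ACME Example Corp",
  "url": "https://www.example.com/",
  "logo": "https://www.example.com/logo.png",
  "sameAs": [
    "https://www.facebook.com/acme-example",
    "https://twitter.com/acmeexample"
  ]
}
</script>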


 query : Re: How can the index.html file be served from a sub directory of the DocumentRoot without using system level links? How can the index.html file be served from a sub directory of the DocumentRoot

@Heady270

Your rewrite rule is close. You just need to restrict it a bit more.

RewriteRule ^/?$ /html/index.html [L]


The / matches a literal slash, which is the desired URL path for the home page.

The question mark makes the slash optional so that the rule can be used in either .htaccess or inside an Apache .conf file.

The ^ matches the start of the URL path and the $ matches the end. Together they restrict the rule to just the home page.

If you want to rewrite all .html files to that subdirectory, you could also add this rule:

RewriteCond %{REQUEST_URI} !^/html/
RewriteRule ^/?(.*\.html)$ /html/$1 [L]

The RewriteCond excludes URLs that already start with /html/ so the rule doesn't rewrite its own output in a loop, and the $1 carries the requested file name into the subdirectory.


 query : Re: Same Structured Data on multiple pages? I was certain about this at first, but the more I think about it the more uncertain I get. The site is offering 28 educations, and each education is

@Heady270

Google's John Mueller recently weighed in on this:


In general, the structured data on a website or on a page should be specific to that particular page. So it shouldn't be something where you kind of say, well, this is a review of our business in general, and therefore, we'll put it on all of the pages.


So you should only mark up a given piece of structured data on one page, and it should be the page that covers that item as its main topic.


 query : Re: Adding LetsEncrypt to Tomcat/Apache setup with mod_jk I have a Tomcat server running behind Apache using mod_jk (the AJP connector in Tomcat). I just tried adding SSL to this setup with LetsEncrypt,

@Heady270

You can use the JkUnMount directive to prevent a directory from being handled by Tomcat, so the ACME challenge directory can be served by Apache instead.

The configuration for that might look like:

JkUnMount /.well-known/acme-challenge/* *


The final * should unmount it from all configured workers. If you want to exclude that directory for just one of several workers, you can use the worker name instead.

I prefer a reverse proxy over mod_jk with Tomcat, mostly because I find it easier to debug. Excluding directories works similarly with a reverse proxy; the configuration is:

ProxyPass /.well-known/acme-challenge/ !
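
In context, the full proxy configuration might look something like the following sketch (it assumes Tomcat's AJP connector on its default port 8009; the exclusion has to come before the general mapping because ProxyPass rules are applied in configuration order):

ProxyPass /.well-known/acme-challenge/ !
ProxyPass / ajp://localhost:8009/
ProxyPassReverse / ajp://localhost:8009/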


 query : Re: How to specify the business type of a LocalBusiness? (SDTT doesn't recognize a "Gardening" type) Implementing Schema.org LocalBusiness in JSON-LD. The business is for a Gardening. How can I specify

@Heady270

I would make use of sameAs, something like this:

<script type="application/ld+json">
{
  "@context": "http://schema.org/",
  "@type": "Service",
  "serviceType": "Gardening",
  "sameAs": "http://dbpedia.org/page/Gardening",
  "provider": {
    "@type": "LocalBusiness",
    "name": "ACME Gardening Services"
  },
  "areaServed": {
    "@type": "State",
    "name": "Massachusetts"
  },
  "hasOfferCatalog": {
    "@type": "OfferCatalog",
    "name": "Gardening services",
    "sameAs": "http://dbpedia.org/page/Gardening",
    "itemListElement": [
      {
        "@type": "Offer",
        "itemOffered": {
          "@type": "Service",
          "name": "Garden Planting",
          "sameAs": "http://dbpedia.org/resource/Garden_planting"
        }
      },
      {
        "@type": "Offer",
        "itemOffered": {
          "@type": "Service",
          "name": "Garden maintenance",
          "sameAs": "http://dbpedia.org/resource/Garden_maintenance"
        }
      }
    ]
  }
}
</script>


 query : Re: How many reviews to include in Schema markup (JSON-LD) on product pages? We have product pages with multiple reviews. Some have 200, 300 reviews. Obviously we do not put them all in the page

@Heady270

For products, Google only pays attention to the aggregate rating. See developers.google.com/search/docs/data-types/product
You tagged your question as SEO, but there is no ranking benefit from marking up your reviews. At most you will get rating stars in the search results.
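
For illustration, the aggregate rating piece on a product page might look something like this (the product name and numbers are placeholders):

<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "Product",
  "name": "Example Widget",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.4",
    "reviewCount": "312"
  }
}
</script>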

For other types of reviews, you might want to mark up the actual reviews:


Job postings: developers.google.com/search/docs/data-types/job-posting
Restaurants, stores, movies, or books: developers.google.com/search/docs/data-types/review

In cases where you do mark up all the reviews, they don't all have to be on one page. You can break the reviews into multiple pages.


 query : Re: Is page age a ranking factor? My website is quite new, only about a year old. I publish one post per day. I am seeing a pattern where pages start getting SEO traffic 15 to 30 days after

@Heady270

When you are seeing that it takes Google 15 to 30 days to start sending traffic to a page, that is most likely because it is taking that long for Google to find and index content on your domain. As your domain gains trust, Google will find and index pages you post much more quickly. Google comes and re-crawls your site based on how many inbound links it has. As you gain links, Googlebot will fetch content from your site much more often. Many trustworthy sites, including this one, get their new content indexed within hours.

Google certainly uses page age as a ranking factor, but it isn't quite as straightforward as you might think.

For news queries a newer page is a bonus. Google has a concept they call "query deserves freshness" or QDF. News searches need freshly updated pages to please users and Google prefers those fresh pages for those queries.

For other queries Google usually prefers established pages that have a history of pleasing users. We call that content "evergreen content."

Google constantly collects additional data about your pages. It views how users react to them. It sees if users use the back button to come back to Google after viewing your content. It sees if the page attracts external links. Older pages have much more history. The ones with positive history will rank well.

As your page ages, Google is able to collect more and more data about your page. When your page is high quality, Google will eventually notice that and start trusting it. If it is mediocre quality or there is higher quality competition your page may never rank well.

When you first post evergreen content, it may enjoy a honeymoon period where Google "tastes" the page. Google may try it out on the first page of the results to see how users react to it. It usually doesn't stay there unless it gets a very positive click through rate. After the test, it may fall in rankings substantially, often back several pages. It may take it years to get back to the first page of search results.


 query : Re: Linkless Mentions & No-follow Links : Potential Future Impact I have been reading in some places that google may consider linkless mentions of a brand as a ranking signal. For example, a popular

@Heady270

I once heard that nofollow links have a negative connotation, since the attribute marks a reference that the referrer does not endorse. They simply don't pass value, and not only for the reason most people believe: there is also the lost opportunity to link, or semantically associate, the two documents (pages).

Mentions, on the other hand, reinforce your brand if the citation appears in a positive context. Assuming all of your website's citations are good, you might notice a positive correlation between the number of citations and your direct traffic, which also helps people recognise your brand.

Future impact? Nofollow links will definitely generate more traffic, and hopefully qualified traffic. Mentions, as long as they are positive, will also have a positive impact on your traffic. In terms of rankings, only mentions are likely to have a positive impact.


 query : Re: Google indexes the number of pages of a site differently depending on your browser language. Why? I was checking site:site.com in Google.co.uk and I changed my browser language to German and

@Heady270

Google warns that the number of URLs reported by a site: query is not reliable.

In some cases it seems to be. Google maintains a large number of data centers across which the index is mirrored, but the mirroring is not always complete. Google has its own queue for which data gets replicated first, and the count of indexed URLs does not have top priority.

Even the number of indexed URLs in Search Console is not 100% reliable - well, it is, but it is subject to delays too.

The main cause of the unreliability is the lack of 100% real-time mirroring of all the data Google has - but we can't really expect that from Google either.

The only way to be sure about the number of indexed URLs is to scrape the SERPs yourself, with iMacros or the like. For critical projects without too many URLs that is my preferred method: iMacros plus some Windows automation to launch a browser and run the macro.


 query : Https://www.namecheap.com/support/knowledgebase/article.aspx/385/2237/how-do-i-set-up-a-url-redirect-for-a-domain documents NameCheaps redirecting rules: It is important to note that the value in the

@Heady270

www.namecheap.com/support/knowledgebase/article.aspx/385/2237/how-do-i-set-up-a-url-redirect-for-a-domain documents NameCheap's redirecting rules:


It is important to note that the value in the Destination URL affects where and how the URL is redirected.

Host: www1.example.net
Destination: example.com
Host: www2.example.net
Destination: example.com/
In the first case, www1.example.net will not pass values to the destination URL, so www1.example.net/xyz.html will redirect users to example.com only. Thus, all values that you put in the original URL under your domain name will be left out.

In the second case, www2.example.net/xyz.html will redirect users to example.com/xyz.html (pay attention to the symbol "/" in the configuration). All values that are put into the original URL under the domain name will be included in the destination address and applied in the results.


You have to add the trailing slash on the destination URL to get NameCheap to preserve URL paths.


 query : Re: What is `/&wd=test` URL that is being requested from my site, probably by bots I'm seeing error logs on a website because something tried to access: example.com/&wd=test the HTTP_REFERER

@Heady270

This is a request from a Baidu search bot. Baidu is a search engine from China; just as Google has its own crawler, Googlebot, Baidu has its own. There is nothing suspicious or dangerous about this request.

If you don't want it (for example, to keep it out of your statistics), you can block it with your robots.txt, like:
#Baiduspider
User-agent: Baiduspider
Disallow: /


Or block it with server configuration (which is enforced even if the bot ignores robots.txt), for example on Apache:

<IfModule mod_rewrite.c>
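# Return 403 Forbidden to any request whose user agent contains "baidu" (case-insensitive)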
RewriteCond %{HTTP_USER_AGENT} baidu [NC]
RewriteRule .* - [F,L]
</IfModule>


 query : Re: Do you need to credit a thumbnail photo? Say, I use a thumbnail photo, as a smaller or cropped version of the original photo on a site/blog. The thumbnail is used as a preview to the post/article

@Heady270

Several court cases have held that using thumbnail images is fair use when:


They are small enough that they don't satisfy the users' desire to see the larger original.
They serve a different purpose than the original, especially when that different purpose is to direct users to the original.


Here is an article written by a lawyer that lays it out in more detail: garson-law.com/thumbnail-images-infringement-or-fair-use/
As far as crediting the thumbnail, when you use something under fair use, you are not required by law to credit that usage. See cmsimpact.org/resource/fair-use-frequently-asked-questions/. That article suggests that credit is important to artists and that it is good etiquette to provide it. I'd say that your link to the original would satisfy most creators' desire for credit.

This answer was written with US law in mind.


 query : Re: Will Google recognize a comma as a separator in page titles without spaces? I know that it is allowed in SEO to use comma as separator but, is it ok to add comma as a separator between

@Heady270

You may use any character as a separator, but you should always keep readability in view. Poor readability will certainly hurt your SERP CTR - and it would be a shame to earn the impression but lose the click.

Google will recognize your keywords anyway - it does so based on the words placed before and after the oddly separated ones, as in your example. Compare the SERPs for these three searches:

www.google.com/search?q=high+blood+pressure,+hypertension
www.google.com/search?q=high+blood+pressurehypertension
www.google.com/search?q=high+blood+pressure,hypertension

They are the same - ergo Google recognizes them all. But seeing that in a page title would not necessarily motivate anyone to click it.


 query : Re: Page-specific skins in MediaWiki? Is there a way to force a particular skin to be applied while displaying specific MediaWiki articles? In my wiki many articles will have a "flip" version with

@Heady270

There is a SkinPerPage extension that serves exactly this purpose: to force a particular skin on a given page.

In short:

(1) Download the extension, unpack files in /wiki-folder/extensions/

(2) Add wfLoadExtension( 'SkinPerPage' ); instruction to your LocalSettings.php

(3) Add the <skin>skin-name</skin> tag to any page that needs to show a skin different from the default.

Sweet :)

Tested on the brand new MediaWiki 1.30.0 release.


 query : Re: Stats in Google Search Console dropped off after changing domain and moving to HTTPS, but analytics traffic is steady I have two sites where the data in GWT has done this Site 1 past 90 days

@Heady270

You need to add the HTTPS version of your site to Google Search Console and verify it. When you move from HTTP to HTTPS, the data no longer appears in the HTTP property you have been using; it now only goes into the HTTPS property.

If you want to see the stats for both together, you can create a property set that contains both versions.


 query : Does Googlebot execute Google Tag Manager? I wanted to understand how Googlebot (and other crawlers) crawl my site. Specifically whether it passes a document.referrer and if it maintains localStorage

@Heady270

Posted in: #Googlebot #GoogleTagManager #Gtm #Javascript #Seo

I wanted to understand how Googlebot (and other crawlers) crawl my site. Specifically whether it passes a document.referrer and if it maintains localStorage keys, so I implemented a script via Google Tag Manager that detects these crawlers and logs data to Logstash.

This is the condition I'm using to detect crawler user agents (returns true for crawlers):

function() {
if(navigator.userAgent.indexOf('robot de Google') < 0 &&
navigator.userAgent.indexOf('Googlebot') < 0 &&
navigator.userAgent.indexOf('bingbot') < 0 &&
navigator.userAgent.indexOf('msnbot') < 0 &&
navigator.userAgent.indexOf('BingPreview') < 0 &&
navigator.userAgent.indexOf('Yahoo! Slurp') < 0) {
return false;
} else {
return true;
}
}


And this is the tag that sends a request to Logstash via an image pixel on the GTM Pageview event:

<script type="text/javascript">
(function (d) {

var pagePath = encodeURIComponent(document.location.pathname);
var pageReferrer = encodeURIComponent(document.referrer) || "null";
var userAgent = encodeURIComponent(navigator.userAgent);

var viewCount = Number(localStorage.getItem("preview_view_count")) + 1 || 1;
localStorage.setItem("preview_view_count", viewCount);

var js;
js = d.createElement('img');
js.style = 'display:none;';
js.alt = 'tracking img';
js.src = 'http://MY_LOGSTASH_ENDPOINT_DOMAIN/pixel.gif?EVENT=LogCrawl&USER_AGENT=' + userAgent + '&PAGE_PATH=' + pagePath + '&PAGE_REFERRER=' + pageReferrer + '&VIEW_COUNT=' + viewCount;
d.body.appendChild(js);
})(window.document);
</script>


Now when I look at Logstash, I only see 40 hits in the last 4 days from Googlebot, but Search Console reports ~50,000 pages crawled per day.

Has anyone tried to log Googlebot with GTM before? I'm trying to figure out if there is something wrong with my script, or if Googlebot just doesn't execute Javascript most of the time.

Any ideas are highly appreciated. Thanks.


 query : No matching DirectoryIndex error with wordpress I have a basic wordpress installation running on a localhost apache server. When I try to fetch the root page, apache gives a 403 error. The

@Heady270

Posted in: #Apache #Linux #Php #Wordpress

I have a basic wordpress installation running on a localhost apache server. When I try to fetch the root page, apache gives a 403 error. The log says:

[Wed Feb 07 13:05:54.519901 2018] [autoindex:error] [pid 2018] [client 127.0.0.1:44307] AH01276: Cannot serve directory /usr/share/wordpress: No matching DirectoryIndex (index.php) found, and server-generated directory index forbidden by Options directive


So, I realize this basically means that it tried to serve the content at /usr/share/wordpress, and it didn't find an index.php file. But I don't understand why it can't find it, because it clearly exists:

>ls -lh /usr/share/wordpress/ | grep index.php
-rw-r--r-- 1 www-data www-data 418 Feb 7 11:08 index.php


My apache wordpress config file (/etc/apache2/sites-available/wordpress.conf) contains the following:

>cat /etc/apache2/sites-available/wordpress.conf
Alias / /usr/share/wordpress
<Directory /usr/share/wordpress>
Options FollowSymLinks
AllowOverride all
DirectoryIndex index.php
Allow from all
</Directory>


So I don't see anything wrong here. The wordpress config tells it to use index.php as the DirectoryIndex, and index.php clearly exists in /usr/share/wordpress.

Even more crazy, if I actually enable the server-generated directory index by doing:

Options FollowSymLinks Indexes


... and then I fetch localhost/, it works with a 200 and gives me the directory index. And indeed, inside the directory index it lists index.php as a file!

But if I fetch localhost/index.php directly, I get a 404.

So, any ideas what I might be doing wrong here?


 query : Re: Will a permanent redirecting domain appear in Google search results? I have a domain A which will permanently redirect to domain B. May I know if there is a chance domain A will appear from

@Heady270

Google doesn't index pages from redirecting domains. However, Google does the right thing when somebody searches for the domain that redirects: it shows search results from the target domain. Google also uses redirecting domains as a major factor in deciding whether a query deserves "did you mean" treatment.

Here is a fictional example that illustrates a misspelling redirecting to example.com. I created this from one of my own redirecting domains, but anonymized it by using the example domain. In my case the alternate domain has a 301 permanent redirect to my main domain. For my sites, this works whether the search includes the .com or just the alternate brand name.


 query : Re: SEO - switching domains on an established site to a brand new domain I made a new site for a client that is a huge improvement in terms of asset optimisation, page loads, on-page SEO etc.

@Heady270

In your situation, where no manual or algorithmic penalty has been applied to the old domain, I would suggest keeping both websites only if there are important backlinks you cannot transfer to the new site.

Having two websites online will pay off if you treat the old one as a lead-generation website. Instead of a content-split mindset, think of it as a way to segment the old website's audience within a mature customer life cycle or sales funnel. You can also split the backlink profile along the segments of that funnel.

Remove any processing operations from the old domain and leave it just to collect contact information higher up in the conversion funnel. Leave any processing operation (payments, data, CRM, marketing automation) to the new domain.

Is the business moving in a new direction? Perfect: the old website will serve well those customers who will later be carried over into the new business flow.

I would not recommend implementing a short-term backlink strategy (aka PBNs) on a new domain; it simply won't work as expected and will probably harm your new website's credibility.

If you decide to get rid of the old domain, a 301 redirect is obviously your best option. You can create the redirects section by section of the old website to test and measure the impact and get a better idea of the risk involved in changing domains.


 query : Re: Can you tell Google that the same URL is available in 2 languages depending on the browser language? For example, my home page is available in several language and the language displayed depends

@Heady270

In theory you should be able to tell Google that the same URL is available in either language depending on browser language. Google announced about 2 years ago that they would start supporting that.

In practice, I don't know of any sites that have implemented it that way and get good SEO rankings. The recommended way of serving a site in multiple languages is still to have separate URLs. Google recommends it that way, and so do I in How should I structure my URLs for both SEO and localization?


 query : Re: How do I export 7, 14 and 28 day Active users in Google Analytics? For years, we've been able to pull data out of the Active Users report on Google Analytics. Now, when we try to export

@Heady270

I can confirm that the export feature from that report is broken for me too.

Google will probably fix it, especially if enough people report the issue. Go to the report, click on the three dots icon in the top right of the page and select "send feedback". Report that the export is broken.


 query : Re: Google Analytics double count conversions when there are two goal paths to the same final destination I'm in charge of a website that puts people in touch with financial experts according to

@Heady270

Google Analytics doesn't ensure that every step in a goal path is hit. Rather, it automatically assumes that all previous steps were hit if it finds that somebody skipped a step or steps.

So when you have two paths with the same destination, it will count both of them when somebody reaches the destination and it will backfill all the steps of the path not taken.

You can't have two goal paths with the same destination and expect counting to work properly.

To make it work you need different destinations. You could even do that by appending a parameter to the URL based on which path was taken and making each goal depend on that parameter.
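
A hypothetical sketch of that idea (the URLs and parameter name are made up): send visitors who finish path A to /contact-complete?path=advisor and visitors who finish path B to /contact-complete?path=calculator, then define two destination goals using "Regular expression" match:

Goal A destination: /contact-complete\?path=advisor$
Goal B destination: /contact-complete\?path=calculator$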


 query : Re: New domain and URL redirection - SEO risk? I'm considering changing my clients URL. Reasons being the following: - We're rebranding his company - Currently, the domain has the small town, outside

@Heady270

I would like to start by recommending that you avoid chains of 301 redirects as much as possible. Redirecting the old domain to the new HTTP domain and then on to HTTPS is an unnecessary hop as far as the old domain is concerned; point the old domain straight at the final HTTPS URLs. On the new domain itself you should of course still redirect HTTP to HTTPS to establish the canonical site.

As long as you do a proper URL mapping from the old domain and create a friendly 404 page with links to other related pages, the risk of losing PageRank will be very low - according to Google, zero, nada. To avoid any risk in that department, keep doing what you are doing: you are using 301 redirects in a thoughtful and honest way, which is exactly their purpose - declaring that your site has moved to another address.

In terms of rankings, or the risk of losing your positions in the SERPs, I believe you might experience a drop, with an expected recovery once Google reindexes the new site and passes on the authority of the old one. However, you need to be aware that potentially more than 200 other ranking factors will have to be reassessed once the new website is live. You might lose rankings for changing the layout, website structure and design, for having new content, and so on. You might even lose rankings by changing servers: the website will serve content from a different location, and the impact on performance will also play an important part.


 query : Re: Clean up hacked site by getting Google to crawl and index only the URLs in the sitemap So recently, our website has been hacked and we're trying to clean everything right now. But, when doing

@Heady270

Google has never limited itself to crawling and indexing just URLs that are in the sitemap. Such functionality does not exist, and I doubt that it ever will.

Sitemaps are fairly useless. They don't help with rankings. They rarely get Google to index pages it wouldn't otherwise index. Google really only uses them to choose preferred URLs, to specify alternate language URLs, and to give you extra data in search console. See The Sitemap Paradox.

You probably don't want to use robots.txt to disallow the URLs either. robots.txt blocks crawling but not indexing. You need to have Google re-crawl the URLs and see that they are gone. Googlebot needs to be able to access the URLs for that.

To clean up your hacked URLs, make sure they now return 404 status. Google will remove each of them within 24 hours of next crawling them. It could take Google a few months to remove all the URLs because it may not re-crawl some of them again soon. See Site was hacked, need to remove all URLs starting with + from Google, use robots.txt?
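
If the injected URLs share a recognizable pattern, you can make them all return an error status with one rule. A minimal .htaccess sketch (the leading "+" here matches the linked question about URLs starting with +; substitute whatever pattern your injected URLs actually share):

RewriteEngine On
# Return 404 for any URL path that begins with "+"
RewriteRule ^\+ - [R=404,L]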

If there are not too many of the URLs, you can submit them individually through the Google Search Console Remove URLs Tool. That will get Google to remove them much faster than waiting around for the re-crawl, but there is no bulk remove feature.


 query : Re: HSTS Preload section on .htaccess Recently having moved a site to SSL, I looked into enforcing HSTS for eventual preload. The syntax is approved and the Chrome List allows it to be OK.

@Heady270

RewriteCond %{HTTPS} !=on
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]


Check whether HTTPS ISN'T on

RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]


Check whether HTTPS IS off

RewriteCond %{SERVER_PORT} !^443$
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]

Check whether connection runs NOT on secure https port 443

But all of them do the same thing: check whether the request is not HTTPS and redirect it to HTTPS.

<IfModule mod_headers.c>
Header set Strict-Transport-Security "max-age=10886400; includeSubDomains; preload"
</IfModule>


You know already what it does.


 query : Re: Can we call a site search result page "SERP"? I've worked in SEO company and I'm familiar with SERP term. I've moved to another company and in this company, we have an Elastic Search server

@Heady270

Site search certainly produces "result pages", so it all depends on whether or not you think your site search is powered by a "search engine".

Most of the definitions of "search engine" on the web are broad enough to include site search. They define a search engine as software that searches documents.

dictionary.com


a computer program that searches documents, especially on the World Wide Web, for a specified word or words and provides a list of documents in which they are found.


Merriam-Webster


computer software used to search data (such as text or a database) for specified information


Webopedia


Search engines are programs that search documents for specified keywords and returns a list of the documents where the keywords were found.


Under these broad definitions of "search engine" your site search results could easily be called "search engine result pages" or SERPs.

There is also a second definition of "search engine":

Merriam-Webster


also: a site on the World Wide Web that uses such software to locate key words in other sites


The Balance


A search engine is a web site that collects and organizes content from all over the internet.


If you think of a search engine as a website that provides search results for other sites, then it would seem odd to include site search results in the term SERP.

Your colleague is not wrong to use SERP for site search results based on the broad definition of "search engine" as a software program. However, I would not use the term that way myself, because many people would think like you and be confused.

In fact, I rarely use the term SERP at all. I prefer just calling them "the results" or "the search results", which is short enough to say and doesn't force people to learn new jargon.


 query : Re: Recommended approach for slug generation with multiple languages I am building a website that will be available in multiple countries. Each country's content will be unique to that country, but

@Heady270

You'll have better user metrics if you create slugs in the language of each language version:


users will find it easier to remember page addresses and visit them again,
users will understand a page's topic faster when reading it in their mother tongue,
in general, you get all the benefits of using the mother tongue instead of a foreign language.


But note!


such a setup is very error-prone and fragile,
it is not a trivial task to get Google to correctly index and rank all of your content in all of the countries you target,
besides, it is, as we say in German, a "Heidenarbeit, die sich kaum lohnt" - a huge amount of work that is hardly worth it.


For the sake of clarity and the best possible indexing and ranking I would always prefer this approach:


one ccTLD → one site → one language.


I recommend this from personal experience - I've seen sites, for example with content for Germany, Austria and Switzerland, whose setup was theoretically correct, but Google wasn't able to rank them correctly. German pages were ranking for Austria, Austrian pages were ranking for Switzerland, and so on.

In your case the content on such pages will be the same:

English


mydomain.com/gb-en
mydomain.com/de-en


German


mydomain.com/gb-de
mydomain.com/de-de


And the only reliable method to route users to the correct version is based on Geo-IP lookups and the browser's language settings. On a user's first visit you detect the Geo-IP and browser language, let the user correct your guess, and save the country and language selection in a cookie. On the next visit you check the cookie and can redirect the visitor to the version saved there.
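
A minimal client-side sketch of the return-visit part (the cookie name and URL scheme are made up for illustration; a real implementation would usually do this server-side):

<script>
(function () {
// Look for a country/language choice saved on a previous visit, e.g. "de-de".
var match = document.cookie.match(/(?:^|; )site_version=([^;]+)/);
if (!match) {
// First visit: the server would fall back to Geo-IP plus Accept-Language,
// show its best guess, and let the user correct it.
return;
}
var version = decodeURIComponent(match[1]);
var prefix = '/' + version;
// Simplified: assumes every language version mirrors the same path structure.
if (document.location.pathname.indexOf(prefix) !== 0) {
document.location.replace(prefix + document.location.pathname);
}
})();
</script>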

Google recommends this method too, but Googlebot itself doesn't read cookies and very seldom crawls from an IP other than an American one. In Google's view, the only way to get a multilanguage site crawled correctly is a correct hreflang implementation. In my experience it is never the case that all language versions of a multilanguage site rank equally well, and the management effort for such a site is far higher than if you ran a separate site for each country.

