
@Bryan171

Bryan171

Last seen: Tue 11 May, 2021


Recent posts

 query : Ssls.com selling PositiveSSL for .88 All sites selling the basic SSL charge at least . What could be the catch in this offer? I've already purchased 5 of them for 2 years. Another good

@Bryan171

Posted in: #SecurityCertificate

All sites selling the basic SSL charge at least . What could be the catch in this offer? I've already purchased 5 of them for 2 years.

Another good thing I see is that after buying, we can activate them at any time (for example, after 5 years) as if they had been bought at exactly that time.

What could be the catch here?

Should I go ahead and buy a dozen of them for future use for my dozen sites?


 query : Re: Google doesn't seem to update the description or title of my homepage Before we launched our website, we had set up a "coming soon" page and google picked up the title and description from

@Bryan171

You can manually request indexing of the new version with Google Search Console.


Login to Search Console
Choose your property
Navigate to "Crawl"
Navigate to "Fetch as Google!
Click "Fetch"
Wait until the request appears in the table (a few seconds)
Click "Request indexing"


This will tell Google to take a look at the page and index the new version.


 query : Re: Install WordPress in a subdirectory of MODX CMS For a client that needs a re-design and a plugin that exists only for WordPress, I would like to install WordPress in a subdirectory of a website

@Bryan171

I would like to install WordPress in a subdirectory of a website that is built with MODX CMS.


God, why in the world do you want to do that? Both systems use .htaccess to create SEF URLs, so you will never be able to debug things correctly when they go wrong - and in my experience they will. .htaccess is a very fragile construction that depends even on the order in which you write the rules into it.

Don't create a source of guaranteed additional headaches. Create a directory at the same level as the MODX installation, put WordPress into it, and you are on the safe side.

And, for better management, put WordPress into its own specially created database rather than into the same MySQL database that MODX uses.
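A minimal sketch of that setup, assuming WordPress lives in a /wordpress/ directory next to the MODX installation; the database name, user and password below are placeholders:

<?php
// wp-config.php (inside the /wordpress/ subdirectory)
// WordPress gets its own database, separate from the MODX one.
define('DB_NAME',     'wordpress_blog');   // placeholder database name
define('DB_USER',     'wp_user');          // placeholder user
define('DB_PASSWORD', 'change-me');        // placeholder password
define('DB_HOST',     'localhost');

$table_prefix = 'wp_';                     // keeps the WordPress tables clearly separated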


 query : Google analytics website - install attribution on PC I'm looking for a way to track installation (or actually launches) of a PC game and correlate them with information from my website google

@Bryan171

Posted in: #GoogleAnalytics

I'm looking for a way to track installations (or actually launches) of a PC game and correlate them with information from my website's Google Analytics (such as traffic source, landing URL, etc.).

I'd like to track conversion rates. I can add HTTP/API calls from my desktop application.

Any ideas on how I can do this?


 query : Re: How to improve SEO ranking of keyword with least content on the webpage? I have a challenge to rank my page for a few keywords. But the dilemma is that my page is bound to have very little content

@Bryan171

You cannot simply rank a login page for a few keywords without context; the login page should not be your main page. Think about it: as a user, I would be pulled out of my search experience to land on a page that raises more questions than it answers.

Create some context around the chat environment, build a funnel, and make sure people searching for X understand what to do, what it is for, and what their benefits will be. That's how you rank.

Do not think the meta title and description tags alone will make your page rank; use them to grab people's attention, which in turn will give you better rankings.

Do not make content invisible, period.


 query : Re: How do I get Google to re-crawl my website and index only my new URLs? When I search my website on Google I get URLs like: http://example.com/article.php?movie=150 And my new url is: http://example.com/online-stuff/150/tit

@Bryan171

If you properly 301 redirected all of your old URLs to your new URLs, Google will eventually see this and update the URLs in the search results.
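As a rough illustration, the old script could issue the redirect itself; lookup_movie_slug() below is a hypothetical helper that returns the title part of the new URL:

<?php
// article.php - sketch of a 301 redirect from /article.php?movie=150
// to the new /online-stuff/150/title-slug style URL.
$movieId = (int) ($_GET['movie'] ?? 0);
if ($movieId > 0) {
    $slug = lookup_movie_slug($movieId);   // hypothetical helper
    header('Location: /online-stuff/' . $movieId . '/' . $slug, true, 301);
    exit;
}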

If Google isn't crawling your site as often as you would like, some simple ways to get it to crawl more frequently are to produce amazing content that it wants to send users to and to get a lot of links pointing to your domain.

The more relevant Google thinks your site is, the more frequently it will crawl it. It uses all of the on-page and off-page SEO signals to determine this.

Sometimes, it just takes a little time.


 query : Re: Responsive web design causes social media buttons to overlap the logo on smaller screens I haven't developed a website in quite some time, and am currently working on one for a friend's small

@Bryan171

Use @media in your CSS. It allows you to have one design for desktop screen sizes and another for mobile screen sizes. You also have to use the meta viewport tag, which it appears you are already doing.

/* Screens narrower than 800px (phones, small tablets) */
@media screen and (max-width: 799px) {
.header {
/* shrink the logo and share icons here */
}
}

/* Screens 800px and wider (desktops) */
@media screen and (min-width: 800px) {
.header {
}
}


The above code will allow you to make the share icons and logo smaller at the screen sizes covered by the max-width rule.


 query : Re: Google Search Console - Submitted URL not selected as canonical We have 3 product pages in which we have added some internal links (PDF links), the crawl report Google search console is showing

@Bryan171

If the PDF content is too similar to your HTML content, you can discourage Google from crawling those files by adding rel="nofollow" to your links pointing to the PDF files.

I think you can also prevent Google from crawling these pages by adding PDF as a parameter here: www.google.com/webmasters/tools/crawl-url-parameters?hl=en&siteUrl=
You can also send a canonical link in the HTTP headers for the PDF files, for example at the web server level.



You can also write the canonical header in PHP and then stream the PDF file:
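A minimal sketch of that approach (the target URL and file path are placeholders):

<?php
// pdf.php - sends a canonical Link header pointing at the HTML version,
// then streams the PDF itself.
header('Link: <https://www.example.com/my-product-page/>; rel="canonical"');
header('Content-Type: application/pdf');
readfile('/path/to/product-sheet.pdf');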



You can learn more about it here:
moz.com/blog/how-to-advanced-relcanonical-http-headers


 query : Generating backlinks through a javascript video widget without getting manual action penalties? I'm finishing an API project that will allow website owners to display related videos on all of their

@Bryan171

Posted in: #Backlinks #Embed #Links #Seo #Widgets

I'm finishing an API project that will allow website owners to display related videos on all of their pages with a simple javascript code. The related videos are completely related to the page's content. And the videos display on the same page as they were clicked on so that the visitor doesn't leave the page. This will hopefully increase the website owner's metrics of time on site, session duration and reduce their bounce rate while adding rich content to their page.



The purpose of this API project was to generate backlinks to my site to increase my web presence. But now I'm reading that widget links can get hit with manual actions from Google along with penalties. And I really just don't understand any of this.

If a widget is so good that a website is willing to embed it or run the javascript with the followed backlink inside of the widget, wouldn't Google see this as a signal that the followed link must have really good content on their website? Why would they hit content developers with penalties like this?

I can offer the javascript widget with a rel="nofollow" backlink inside of the widget, but because of server costs and constraints the only way it will be worth it is if I can get a followed backlink somewhere on the user's website. Which is fine, I can request that they editorially place a followed link somewhere on their site. But the problem is that because of the https protocol that many sites use, I don't see a way to track which websites are embedding my javascript code and using my data. As a result, if an https website is using my javascript widget a lot without linking to my site, I'm unable to restrict access for their domain. I simply won't know which domains are doing that.

I could check my Apache logs to determine which IP addresses are calling my javascript code, but I imagine it would be very difficult to figure out which IP address relates to which web domain. And since https doesn't pass http_referer to PHP, I'm unable to track which websites are using my data.

Is there a solution to this issue and a way for me to be able to offer this really useful widget with highly related video content to websites while being able to ensure that I receive some sort of followed link in exchange for the dramatic server usage that my widget will require on my servers?

The code utilizes .load('http://example.com/page') where example.com is my domain. It would be really preferable if I don't have to offer API secret keys because I'm targeting wordpress users and the learning curve for signing up and implementing API access keys could deter widget adoption.

Can I determine which https websites are using my script with some sort of code? Or is there any other advice about how I can implement backlinks without risking penalty?

Here is an article about widget backlinks from Rand at Moz. It has great info, but I don't think it answers definitively if branded logo backlinks in widgets are still safe. moz.com/blog/backlinks-maximize-benefits-avoid-problems-whiteboard-friday
Thanks


 query : Re: On-page Markup and Sitemap Multi-language I have: example.com/en/product-a-en example.com/es/product-a-es example.com/ru/product-a-ru If i use this: On-page Markup Use the lang attribute in the html

@Bryan171

The way I see it, sitemap.xml is used to tell robots what URLs you want them to crawl the most. It helps Googlebot and other crawlers navigate your site in the way you want them to. If your priority is to crawl and index your alternate language pages then adding them to your sitemap over only English pages will help prioritize their crawling.

If you do not add them to your sitemap, Google should be able to discover and crawl your language alternate pages, especially if they are linked to without the rel="nofollow" attribute.

But if your sitemap currently has all English language pages and you want your alternate language pages to be crawled more frequently, you should strongly consider adding them to your sitemap. Because if you don't, Googlebot and other crawlers might decide to prioritize your English pages in your sitemap over your alternate language pages.


 query : Re: How to automate custom top level domain registration for SaaS? I am developing a SaaS that will offer clients the ability to create a corporate identity website (insert content, upload logos,

@Bryan171

Top level domain registration is performed through registrars that are verified and approved by icann.org. In order to become a registrar you need to sign up with ICANN, and I believe the fee can be upwards of 0,000 per year.

You can automate top level domain registration through APIs provided by reputable third-party, ICANN-accredited registrars. These APIs should allow you to register domains through your website and manage the DNS records from your website. This should allow you to then upload the website files to your server and connect them to the corresponding domain names.



Here are some domain registration APIs that might help you complete the process that you are looking for.
www.enom.com/reseller/ www.namecheap.com/support/api/intro.aspx docs.aws.amazon.com/Route53/latest/APIReference/requests-rpc.html


 query : Re: Is page age a ranking factor? My website is quite new, only about a year old. I publish one post per day. I am seeing a pattern where pages start getting SEO traffic 15 to 30 days after

@Bryan171

Domain age is a ranking factor according to SEO experts, but I have never seen page age listed as a ranking factor.

If anything, Google often tries to show fresh and new content to its users. It will often rank articles that were written in the past week over articles that were written 4 years ago. This is because Google understands that older articles are often outdated and are no longer as relevant.

Fresh content is more likely to rank over older content.



It may take some time for Google to crawl your page, which could be a reason as to why it ranks better a few weeks after it has been posted. There are a few ways to increase the likelihood for Google to crawl your page.

Ways to help Google crawl your page:


1: Utilize sitemap.xml to tell Google about pages on your website.

2: Increase the number of internal links to the article that you want Google to crawl. If the article is linked from your home page, Google is more likely to find it than if it is only linked from deep within your site.

3: Submit the page to Google for indexing. You can do this from the webmaster console. www.google.com/webmasters/tools/submit-url

After Google has crawled your page, it still may take some time for Google to index it in its search results. A lot of the indexing delay can be due to how much trust Google has in your site and your content. It will take longer for Google to index pages on newer, less trustworthy sites than on trusted, established sites such as the NYTimes, which are often indexed almost immediately.

You may also be experiencing a traffic boost two to four weeks into posting the article due to Google determining its ranking value based on the signals it has received. If your page has initially received a low amount of traffic, there aren't many signals for Google to determine how beneficial this page will be to its users. Once you have started to receive some traffic then Google can analyze metrics of the article such as "time on page", "bounce rate", "pages per session", etc, as well as its success on social network sharing. If your article performs well in its metrics analysis, then Google will begin sending you more traffic.


 query : Re: SEO effects of a concise URL path vs a longer one with more keywords My sister wants to launch a celebrity website, which will have information such as age, height, weight etc. Suppose the

@Bryan171

Google does weigh a page based on keywords that are in the URL. And so longtail URLs can be very helpful in this regard.

But longtail URLs when structured well can also be very helpful to people who are searching for information.

Say for instance that a person searches for "What is Demi Lovato's weight?". If they were to see a search result in Google for example.com/Demi-Lovato , they might not realize that this page actually has her weight listed. Whereas a page with the URL example.com/Demi-Lovato/Age-Weight-And-Height clearly indicates that this page will have her weight listed on that page.

A primary goal of URL structuring is to indicate to Google what the page will be about, as well as to indicate to the visitor what the page will be about.

Cleaner URLs without the longtail keywords can and often do look better, but you should also be thinking about how this will impact your click through rate. Do you think that users searching for information about celebrities are more likely to click on the longtail URL or the shorttail URL?

I think the answer to the above question also has to do with your domain name. If your domain name clearly has celebrity information as keywords in its name, it's more apparent what your site is about. Whereas if your domain name is unrelated to the topic that the user is searching for, you may want to indicate to them what the page will be about with a longtail URL.

Click-through rate is a major ranking signal for Google in its search results, so think about which structure will get the best result for you in this respect. If you're trying to rank on Google, I think longtail URLs will have the bigger impact. If you already have your own user base and want to clean up the URLs so they can access your pages more easily, then perhaps you can go short-tail. In the end, it's largely a matter of choice.


 query : Re: What needs to be done to user generated outbound links to prevent hurting SEO? I have a site that contains user/external application generated content (content from SMS messages). I want links

@Bryan171

If you want Google to index these SMS messages, definitely use rel="nofollow" on all links. If you don't, not only are you going to pass all of your link juice to those links, you're also going to get penalized hard whenever those links point to sites that are spammy, low-quality, or malware.
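A minimal sketch of doing that automatically in PHP before the user-generated content is rendered (the function name and the wrapper div are illustrative only):

<?php
// Force rel="nofollow" on every link inside a block of user-generated HTML.
function nofollow_user_links($html)
{
    $doc = new DOMDocument();
    libxml_use_internal_errors(true);        // tolerate fragment markup
    $doc->loadHTML('<div>' . $html . '</div>');
    libxml_clear_errors();

    foreach ($doc->getElementsByTagName('a') as $link) {
        $link->setAttribute('rel', 'nofollow');
    }
    return $doc->saveHTML();                 // note: DOMDocument wraps the fragment
}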

If you're posting people's SMS messages onto your website and are trying to rank those messages with Google, please make sure that your users know that their messages are going to be made public on Google. It would be unethical for you to publicly display private messages online. At the same time, if your users know that their SMS messages are being made public, then that is a great tool that you're offering. An easy way for people to publish their original content.


 query : Re: What needs to be done to user generated outbound links to prevent hurting SEO? I have a site that contains user/external application generated content (content from SMS messages). I want links

@Bryan171

It is hard to imagine what topic a page built from users' SMS messages could even rank for.

But one thing is for sure - such content will never help anybody solve an issue. Don't make the mistake of thinking that just because it is content, it should be indexed.

Set the whole page with SMS messages to noindex and set all inbound and outbound links to nofollow. Google doesn't excuse useless content in its index.


 query : Re: Which data is correct Google Trends or Google Keyword Planner Tool? I have searched for a keyword "MLM" in Google Trends (location - Benin) which shows me there is some rising searches about

@Bryan171

Both are correct; they are simply different data sets.

Keyword Planner shows absolute numbers, while Google Trends shows normalized data about user behavior. Normalized here means, broadly, in proportion to time and location.


 query : Re: SEO & Angular Universal (Angular 5) We are using Angular Universal to render our Angular 5 app. HTML5 is properly rendered by the NodeJS server, but Google does not render the page visually

@Bryan171

Angular is suited only to projects where SEO is definitely NOT in the scope of requirements. Any SEO question regarding Angular shows that it is actually already too late to think about SEO.

To your questions:


Yes, it is harmful. It shows that the HTML rendering fails massively.
I would say the rendering failure is caused not by the component itself, but rather by its implementation. Looking into the HTML source code of the example under material.angular.io/components/tabs/examples I see completely different HTML markup for the tabs than what "Fetch as Google" displays.


 query : Are links in canvas/WebGL with a sitemap good enough for SEO, or do I need normal a href links too? The main page of my site is comprised of big graphics and animations implemented with Canvas/WebGL

@Bryan171

Posted in: #Seo #SinglePageApplication #Sitemap

The main page of my site is comprised of big graphics and animations implemented with Canvas/WebGL using pixi.js library. The problem is all links are also implemented as interaction with WebGL layer.

But then again, I know I can just list my links in sitemap.xml. Is listing the links in the sitemap enough for good SEO in this situation?

Or should I check for something and insert normal <a> links instead?


 query : Is placing an image over H1 text an SEO issue? We have a main image on every page of a website. This image includes some text. We want to use text in image as h1 but to make it live

@Bryan171

Posted in: #Html #Images #Seo

We have a main image on every page of a website. This image includes some text.

We want to use the text in the image as the h1, but making it live text on top of the image would be a problem because of responsiveness issues.

What if we place the h1 as live text under the image (it would NOT be visible, using z-index etc.)?

Would this be an SEO issue and would Google not like this?


 query : Re: Does Google care about cached versions of site having Stylesheet and JavaScript files? I would like to know if having Stylesheets and JavaScript files available for old, cached versions of a

@Bryan171

The only purpose of Google's website cache is to provide a better user experience. Google stores a cached version of the web page in case the page becomes unavailable (the page is broken, the server is down, etc.). Users can also use the cache in case the page is too slow to load.

When auditing, SEOs sometimes check the cached version of a web page to confirm that Google is caching the page properly, or to check whether the website has been hacked (with parts of the page visible to bots but not to any user). After problems are fixed you won't need to worry about the cache at all. It has nothing to do with rankings; it is just a reference for SEOs and (sort of) a backup for Google.


 query : Re: Wordpress Layered Nav pages - Noindex I have been trying to figure out how to noindex the pages created by the WooCommerce widget - Layered Nav. I use Yoast and have used that to noindex other

@Bryan171

I would suggest going to Configuration > robots.txt > Settings in Screaming Frog and checking your robots configuration to make sure "Ignore robots.txt" is unchecked. Also go to Configuration > HTTP Headers > User Agent and select "GoogleBot Regular" as the preset user agent. This way you will have a better idea of how Googlebot will crawl your website.



Then, go to your website's robots.txt file and add instructions for how bots should crawl the pages of your website. For example, if you would like to block the provided URL, add the following lines:

.
.
.
Disallow: /*?*
Disallow: /whishlistpage
Disallow: /userloginpage


For example, Disallow: /*?* will prevent bots, including Googlebot, from crawling any page or URL that contains the "?" character. However, the pages could still get indexed if any of those pages are linked from another page. The most effective way to keep those pages out of the index is to implement the meta robots tag. Using the Yoast SEO plugin, make sure you set those pages to noindex.

You need to set the meta robots tag for all the filtered result pages and the areas of the website you wish to block, such as the user login page:

<meta name="robots" content="noindex">
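If you would rather handle this in code than through Yoast's per-page settings, here is a minimal sketch of the same idea (it assumes the filtered layered-nav URLs are the ones carrying a query string; adjust the condition to your actual filter parameters):

<?php
// In the theme's functions.php: emit a noindex robots meta tag on filtered URLs.
add_action('wp_head', function () {
    if (!empty($_SERVER['QUERY_STRING'])) {
        echo '<meta name="robots" content="noindex, follow">' . "\n";
    }
});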


Wait until Google has crawled your web pages again with the meta robots tag in place, and then, and only then, edit the robots.txt file. This way crawlers will know they do not need to crawl those pages again in the future.

Note: I would also like to point out that as long as the pages carry the noindex (and sometimes nofollow) meta tag, they won't get indexed. Whether or not the pages are crawlable by Screaming Frog does not determine whether they will be indexed; Screaming Frog only reports back to you and has nothing to do with Google. Google will simply obey the noindex meta robots tag and not index your pages accordingly. In other words, the fact that Screaming Frog finds and reports those URLs back to you does not mean that Googlebot won't get the noindex meta robots tag instructions.


 query : Re: Making a DNS setup for 20+ domains hosted on the same IP-address more manageable We've got about 20 localised domains for just one website, all of them point to the same IP address The current

@Bryan171

I hope this answer helps you, because it helped me. I have managed 20+ domain names on 5 static IP addresses using No-IP, a dynamic DNS and managed DNS provider. I downloaded their Dynamic DNS Update Client (DUC) for Windows and use it to keep my 5 static IP addresses in sync with my No-IP hosts and domains.


 query : Re: www site address DNS issue Sorry if I leave anything out. I'm new to this side of web-development. I'm a developer who deals mainly with WordPress sites. I took a job working for a company

@Bryan171

onpacefs.com currently has CloudFlare nameservers (igor and becky), and www resolves to the IP addresses 104.18.51.170, 104.18.50.170, 2400:cb00:2048:1::6812:32aa and 2400:cb00:2048:1::6812:33aa.

In my browser www.onpacefs.com and www.onpacefs.com both redirect to onpacefs.com/ which loads successfully.

onpacefs.com resolves to the same 4 IP addresses.

You have no DNS configuration errors: dnsviz.net/d/onpacefs.com/Wmj82w/dnssec/
So I fail to see your problem. You should give more details on what you see exactly (both on the DNS and HTTP level), what you have changed (from what value to what value) and what you expect.
(I do not know what you mean by "however the previous developer did some interesting things with the DNS on cloudflare")


 query : Re: Do Child Categories have any influence on Parent Categories, when it comes to SEO/SERP? I am currently working on a WordPress e-commerce website, where the chosen shopping platform is WooCommerce.

@Bryan171

In short - no, child categories don't have any influence on their parent categories (nor the other way around).
What does have the most important influence is their internal linking.

Example

Say you have category nesting like:


eye wear
reading glasses
sun glasses


There is no difference whether you have them as

example.com/eyewear/, www.example.com/eyewear/reading-glasses/, example.com/eyewear/sun-glasses/,

or as

example.com/eyewear/ www.example.com/sun-glasses/ example.com/reading-glasses/

But it is very important (if not the most important thing) how they are interlinked. Here is what interlinking means:
example.com/eyewear/ should have links to:


most popular reading glasses (links to products),
links to category /reading-glasses/
most popular sun glasses (links to products),
links to category /sun-glasses/

example.com/sun-glasses/ should have links to:


most popular reading glasses (other users bought → links to products),
link to category /eyewear/ (breadcrumb)
link to category /reading-glasses/ (list of another categories)


Got the idea?


 query : Google doesn't recognize a 404 status code There are 404 pages with two kinds of response headers (copypasted in full length from Chrome DevTools, Network tab): Response headers: cache-control:max-age=0,

@Bryan171

Posted in: #Google #Googlebot #GoogleSearchConsole

There are 404 pages with two kinds of response headers (copypasted in full length from Chrome DevTools, Network tab):


Response headers:

cache-control:max-age=0, no-store
content-type:text/html
date:Wed, 24 Jan 2018 10:55:59 GMT
server:Apache/2.4.29 (Ubuntu)
status:404
x-powered-by:PHP/5.5.9-1ubuntu4.22
Response headers

cache-control:max-age=0, no-store
cache-control:no-cache, max-age=0, must-revalidate
content-type:text/html; charset="utf-8"
date:Wed, 24 Jan 2018 10:55:40 GMT
expires:Thu, 19 Nov 1981 08:52:00 GMT
pragma:no-cache
server:Apache/2.4.29 (Ubuntu)
set-cookie:bypassStaticCache=deleted; expires=Thu, 01-Jan-1970 00:00:01 GMT; Max-Age=0; path=/; httponly
set-cookie:bypassStaticCache=deleted; expires=Thu, 01-Jan-1970 00:00:01 GMT; Max-Age=0; path=/; httponly
status:404
x-powered-by:PHP/5.5.9-1ubuntu4.22


The pages with the first kind of response headers aren't recognized by Google as 404. Instead of reporting 404s, Search Console flags those pages as duplicate pages without a canonical tag.

The pages with the second kind of response headers are correctly recognized as 404.

Not recognized means: Google calls such pages "Duplicate page without canonical tag", despite the fact that the developer tools of both Firefox and Chrome get the correct 404 status code.

Recognized means: such pages are called Not found (404), like on the following screenshot:


Why is it so? What prevents the correct status code recognition? Does the answer need additional information? Just say so - I'll try to provide it.

PS: maybe it is a bug in the new Search Console...? @JohnMu


 query : Re: Multiple <!DOCTYPE html>s in a webpage I have a site which is divided into templates and sub-items for templates in the backend's template language. However the sub-templates which are embedded into main

@Bryan171

Applying multiple <!DOCTYPE html> declarations will throw an error in the HTML5 validator, and any error on a web page has a direct or indirect effect on its search engine optimization. To see why, consider what the browser has to do when parsing your page:


Parsing is a very significant process within the rendering engine.
Parsing a document means translating it to a structure the code can
use. The result of parsing is usually a tree of nodes that represent
the structure of the document. This is called a parse tree or a syntax
tree. Parsing is based on the syntax rules the document obeys: the
language or format it was written in. Every format you can parse must
have deterministic grammar consisting of vocabulary and syntax rules.

Parsing can be separated into two sub processes: lexical analysis and
syntax analysis.

Lexical analysis is the process of breaking the input into tokens.
Tokens are the language vocabulary: the collection of valid building
blocks. In human language it will consist of all the words that appear
in the dictionary for that language.

Syntax analysis is the applying of the language syntax rules.

Parsers usually divide the work between two components: the lexer
(sometimes called tokenizer) that is responsible for breaking the
input into valid tokens, and the parser that is responsible for
constructing the parse tree by analyzing the document structure
according to the language syntax rules.

The parsing process is iterative. The parser will usually ask the
lexer for a new token and try to match the token with one of the
syntax rules. If a rule is matched, a node corresponding to the token
will be added to the parse tree and the parser will ask for another
token.

If no rule matches, the parser will store the token internally, and
keep asking for tokens until a rule matching all the internally stored
tokens is found. If no rule is found then the parser will raise an
exception. This means the document was not valid and contained syntax
errors.


Source: How Browsers Work: Behind the scenes of modern web browsers, on HTML5 Rocks.

You can see how much work the browser does when opening a webpage. This parsing takes the browser some time. Now imagine that the source code of the web page contains errors: the browser's parsing time increases, and the page load speed decreases - and page speed is a Google ranking signal, both for desktop and for mobile. Thus, by breaking the HTML standard (source code errors are a violation of the accepted standard) you are contributing to a decrease in the search rank of your webpages - in other words, negative SEO.

A possible solution to this problem is to make sure the DOCTYPE declaration is emitted only once, by the main template, and removed from the embedded sub-templates.
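As a rough illustration, assuming a simple PHP include-based template setup (the file names are placeholders, and your backend's template language may differ), only the outer layout prints the doctype and the <html> shell:

<?php
// layout.php - the ONLY file that outputs the doctype and <html> shell.
?>
<!DOCTYPE html>
<html>
<head><title><?php echo htmlspecialchars($title ?? ''); ?></title></head>
<body>
<?php
// Sub-templates are plain fragments with no doctype of their own.
include __DIR__ . '/partials/header.php';
include __DIR__ . '/partials/content.php';
include __DIR__ . '/partials/footer.php';
?>
</body>
</html>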


 query : How do I successfully install a 'Let's Encrypt Certificate' onto my VPS and Emails? I currently run a Plesk VPS, which hosts 2 domains. I have installed a Let's Encrypt Certificate onto the

@Bryan171

Posted in: #LetsEncrypt #Openssl #Plesk #SecurityCertificate #Vps

I currently run a Plesk VPS which hosts 2 domains. I have installed a Let's Encrypt certificate on the VPS as well as within both of the domains.

Upon installation, the 'Secure' notification and valid Let's Encrypt Certification appear in the browser for both the VPS and the 2 domains; which is exactly what I want.

Unfortunately, I have now come into a problem with the emails. I can send and receive emails perfectly fine. This is the case for when I directly log in via Webmail and when I use an email client such as Outlook 2016.

That said, I seem to encounter a problem with the email certificate. When I go to add an email account to Outlook 2016, I encounter the following Internet Security Warning notification within Outlook 2016:



When I view the certificate, I am taken to the following window:



As you can see, the certificate displays 'vps123456.123-vps.co.uk' (not the real name), which is the name of the Plesk domain and is inserted into the Let's Encrypt certificate as displayed below:




I accessed the above by going to: Plesk Control Panel > Tools & Settings > SSL/TLS Certificates > Let's Encrypt.

Could the issue be that I am trying to apply the Server certificate to the domain email rather than having a separate Mail Domain Server certificate? I have already created 2 domains, where I have successfully uploaded the certificate.

If this is the case, would I need to create a certificate for 'mail.domain.tld'? If this is the case, what would be the steps for this? I have had conflicting steps across various forums. Would I need to create a Sub Domain for the emails? Or change my Hostname etc?

Through looking at many articles, over the past few days, I have completely confused myself over the application of the SSL Certificate by Let's Encrypt.

Any direction, on this matter, would be greatly appreciated.


 query : What entry should be put into the Domain Name, when it comes to SSL Certificates for a Plesk VPS? I am currently in the process of securing a VPS of mine, with the Let's Encrypt SSL Certificate.

@Bryan171

Posted in: #Https #Plesk #SecurityCertificate #Vps

I am currently in the process of securing a VPS of mine, with the Let's Encrypt SSL Certificate.

Referring to the below image, what is required when it says 'The domain name must resolve to your server'? Should I enter one of the domains that are on the VPS, in the format domain.com, or should I enter the VPS IP address?


 query : Recommended approach for slug generation with multiple languages I am building a website that will be available in multiple countries. Each country's content will be unique to that country, but

@Bryan171

Posted in: #Hreflang #Language #Localization #Seo #UrlSlug

I am building a website that will be available in multiple countries. Each country's content will be unique to that country, but each country's content will be offered in multiple languages, using the hreflang mechanism.

The URL structure of the website will be, for example:

UK
mydomain.com/gb-en - English (default)
mydomain.com/gb-de - German

Germany
mydomain.com/de-de - German (default)
mydomain.com/de-en - English


As stated, both the UK and Germany websites will have unique content.

My question is, when thinking about SEO and the keywords within the URL, should the slugs be localised for each language on each website, or should the slugs only reflect the default language?

So, for example should I have:

mydomain.com/gb-en/my-seo-friendly-blog-post - English (default)
mydomain.com/gb-de/my-seo-friendly-blog-post - German


Or should I have:

mydomain.com/gb-en/my-seo-friendly-blog-post - English (default)
mydomain.com/gb-de/mein-seo-freundlicher-blogeintrag - German

