
@Alves908

Alves908

Last seen: Tue 11 May, 2021


Recent posts

 query : Re: Do you know why i got different statistics from FB pixel and google analytics? Do you know why I got different statistics from FB pixel and google analytics? In facebook pixel, I have 53 link

@Alves908

There could be multiple reasons why Facebook and Google Analytics register different data, but the most important is that Facebook counts clicks while Analytics counts sessions.

If you are not using the same metric, the data can't be the same.

Check Facebook's official documentation for ways to minimize the gap between the two tools.


 query : Re: Impact of subdomain urls on SEO I have an application which is hosted on a subdomain URL of my client's primary domain. Below is the example of how my primary domain is and how the subdomain

@Alves908

Each page of your site should have a unique ranking objective in search results. With that in mind, each page needs a unique title and unique content to achieve that goal.

If you are ranking for the same keywords with both the main domain and the subdomain, you are doing something wrong (from an SEO point of view) because, as you said, you are competing with yourself.

To prevent that situation:


Make sure each page has a unique title and unique content.
Use internal linking to let search engines know which are the "priority pages" of your site (the ones you want to rank for).
Use structured data to tell search engines more about the purpose of each page (see the sketch below).
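
For instance, a minimal structured-data sketch for one page (the values are hypothetical; adapt the type and fields to each page's actual purpose):

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "WebPage",
  "name": "Client application login",
  "url": "https://app.example.com/"
}
</script>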


 query : Re: Why does Google index two copies of a website, one on a development URL with the development site ranking better? I only found one question similar to mine, but it is for Yahoo search. Mine

@Alves908

What's the point of having the same website on two different domains? You are generating duplicate content. While you are working on a site, you should prevent it from being indexed.

The development site is ranking higher because Google discovered it first. You should do one of the following:

Use a 301 redirect from the development domain to your client's domain (see the sketch below).
Use the canonical tag on the development pages, pointing at the client's domain.
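
For the redirect, a minimal sketch assuming Apache and a hypothetical dev.example.com development host:

RewriteEngine On
RewriteCond %{HTTP_HOST} ^dev\.example\.com$ [NC]
RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]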


 query : Facebook crawler with no user agent spamming our site in possible DoS attack Crawlers registered to Facebook (ipv6 ending in :face:b00c::1) were slamming our site, seeing 10s of thousands of hits

@Alves908

Posted in: #Cdn #Cloudflare #Ddos #Facebook #WebCrawlers

Crawlers registered to Facebook (IPv6 ending in :face:b00c::1) were slamming our site; we saw tens of thousands of hits in just 20 minutes. We noticed they didn't have a user agent in the header and implemented a rule on Cloudflare to protect ourselves.

It appears they've patched the crawler and added a user agent 'Externalhit/1.1', which is a recognised crawler. Now they're circumventing the rule, and I'm seeing 11,000 hits in 15 minutes, often multiple times to the same page! This is crippling our database and preventing customers from legitimately using the site.

We've implemented a broad block on all of Facebook's IPs to try and remedy this, but we've likely already lost business because of it.
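
(For reference, a server-level sketch of that kind of block, assuming Apache with mod_rewrite; our actual rules live in Cloudflare:

RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} facebookexternalhit [NC,OR]
RewriteCond %{HTTP_USER_AGENT} ^$
RewriteRule ^ - [F]

This returns 403 Forbidden to requests with the Facebook crawler user agent or with no user agent at all.)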

My question is: Has anyone seen this before? Any idea what's causing it? Is there a channel for getting a response from Facebook or is there a legal route we should go?

Link to our tweet: twitter.com/TicketSource/status/969148062290599937 We tried the FB developers group and a Facebook rep, and were directed to Support. Filed a ticket; no response.

Log sample:

2018-03-01 09:00:33 10.0.1.175 GET /dylanthomas - 443 - facebookexternalhit/1.1 - 200 0 0 5394 2a03:2880:30:7fcf:face:b00c:0:8000
2018-03-01 09:00:33 10.0.1.175 GET /dylanthomas - 443 - facebookexternalhit/1.1 - 200 0 0 5362 2a03:2880:30:afd1:face:b00c:0:8000
2018-03-01 09:00:33 10.0.1.175 GET /dylanthomas - 443 - facebookexternalhit/1.1 - 200 0 0 5378 2a03:2880:30:7fcf:face:b00c:0:8000
2018-03-01 09:00:33 10.0.1.175 GET /dylanthomas - 443 - facebookexternalhit/1.1 - 200 0 0 5425 2a03:2880:30:2fea:face:b00c:0:8000
2018-03-01 09:00:33 10.0.1.175 GET /dylanthomas - 443 - facebookexternalhit/1.1 - 200 0 0 5394 2a03:2880:30:2fea:face:b00c:0:8000
2018-03-01 09:00:33 10.0.1.175 GET /dylanthomas - 443 - facebookexternalhit/1.1 - 200 0 0 5659 2a03:2880:30:2fd8:face:b00c:0:8000
2018-03-01 09:00:33 10.0.1.175 GET /dylanthomas - 443 - facebookexternalhit/1.1 - 200 0 0 5659 2a03:2880:11:dff3:face:b00c:0:8000
2018-03-01 09:00:36 10.0.1.175 GET /whitedreamspremiere - 443 - facebookexternalhit/1.1 - 200 0 0 5048 2a03:2880:2020:bffb:face:b00c:0:8000
2018-03-01 09:00:36 10.0.1.175 GET /helioscollective - 443 - facebookexternalhit/1.1 - 200 0 0 4633 2a03:2880:3020:1ffd:face:b00c:0:8000
2018-03-01 09:00:36 10.0.1.175 GET /helioscollective - 443 - facebookexternalhit/1.1 - 200 0 0 4727 2a03:2880:3011:afc5:face:b00c:0:8000
2018-03-01 09:00:36 10.0.1.175 GET /helioscollective - 443 - facebookexternalhit/1.1 - 200 0 0 4977 2a03:2880:3020:1ffd:face:b00c:0:8000
2018-03-01 09:00:36 10.0.1.175 GET /event/FDMEJD - 443 - facebookexternalhit/1.1 - 200 0 0 4868 2a03:2880:2111:1ff9:face:b00c:0:8000


Edit 2: These IPs really are crawling: we've found URLs from our payment process among the requests, so they followed a link and ended up in a session-only URL.

Edit 3: Facebook has updated the bug report: developers.facebook.com/bugs/1894024420610804


 query : Re: Calculating Session Duration for Single Page Website (Google Analytics) I'm trying to set up some remarketing lists in Adwords and I'd like to target people who've been on my site for over 15

@Alves908

Yes, custom events would solve the problem.


Please check Non-Interaction Events to understand how to create events that do not affect session time or bounce rate.


In this case, I would use the following (see the sketch after this list):


An event on scroll (fired after more than 50% of the page).
An event on time (this depends on the video; you can use setTimeout).
An event on the video player (play/stop, flagged as Non-Interaction, just to learn how visitors use it).
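
A minimal analytics.js sketch of the timer event (the category and action names are hypothetical, and 15 seconds is just an example threshold):

// After 15 seconds, send a normal (interaction) event so the session
// registers a duration and no longer counts as a bounce.
setTimeout(function () {
  ga('send', 'event', 'engagement', '15s-on-page');
}, 15000);

// A Non-Interaction event: record video plays without affecting
// bounce rate or session duration.
ga('send', 'event', 'video', 'play', { nonInteraction: true });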


 query : Re: How can I make the canonical tag different on every page when it is included in my common header section? I have a photo sharing site. I inserted the canonical tag on header ( i have common

@Alves908

Edit: if you want to set a canonical tag on every page, you can do it with Google Tag Manager.


There is no point in using the canonical tag across the whole site. You should only use it on pages with duplicate content.

For example, if you have example.com and example.com/index.php, you should add the following tag in the page header of index.php:

<link rel="canonical" href="http://example.com" />


Search engines will discard the example.com/index.php version. If you have example.com/products and example.com/products/sorted, you should add to both pages:

<link rel="canonical" href="http://example.com/products" />


 query : Re: How can the index.html file be served from a sub directory of the DocumentRoot without using system level links? How can the index.html file be served from a sub directory of the DocumentRoot

@Alves908

You may want the mod_rewrite solution that @Stephen suggests, however, also consider DirectoryIndex (depending on your requirements).

The DirectoryIndex document tells Apache which file to serve when you request a directory. This often defaults to index.html in the directory being requested, however, you can specify any file, anywhere. For example:

DirectoryIndex /html/index.html


This instructs Apache to serve the root-relative /html/index.html file when requesting any directory (eg. the document root, as in your case). If you wanted to serve index.html in the requested directory when it exists, but /html/index.html otherwise, then you can include both, in the order that they should be tested, for example:

DirectoryIndex index.html /html/index.html




Reference: httpd.apache.org/docs/2.4/mod/mod_dir.html#directoryindex


 query : Re: How could I forcibly output the default 403-forbidden error page when http://example.com/not_found is requested? This is my first question on StackExchange so bear with me, here is a brief breakdown

@Alves908

RewriteCond %{REQUEST_URI} ^(https{0,1}://)(www.){0,1}(eksempelhjemmeside.dk|eksempelhjemmeside.de){1}(/not_found){1}$
RewriteRule . - [R=403,NC]



You would seem to be way overcomplicating things here. The REQUEST_URI server variable contains the URL-path only, ie. /not-found. And this is all you appear to be concerned about. You don't need to check for the domain, www/non-www, HTTP/HTTPS, since you seem to want to match all variations anyway. Unless there are some other subdomains etc. that you don't want to catch? (But in that case, it might be simpler to match the exceptions, rather than the target?)

However, as @closetnoc suggested in comments, I would still return a simple "404 Not Found" for these requests - since that is what they are. By "simple", I mean the default Apache 404, so you fail early, rather than passing the request through WordPress, which is what is currently happening. So, all you need is something like the following before the WordPress front-controller:

# BLOCK /not_found
RewriteRule ^not_found$ - [R=404]


No need for the <IfModule> wrapper, RewriteEngine directive (since that is already included in the WP block), or RewriteCond directive. When you specify a status code other than 3xx then you don't need the L (last) flag either (it is implied).

You might want to include the NC (nocase) flag, if you are getting requests for /NoT_FounD and/or /NOT_FOUND etc., but otherwise this should be omitted.

Note that in .htaccess, the URL-path matched with the RewriteRule pattern, does not include the slash prefix. So, the pattern ^not_found$ matches just the URL /not_found.

As an academic exercise, if you did want to return a "403 Forbidden" instead of a 404, then you would change the above directive to read:

RewriteRule ^not_found$ - [F]


Again, no need for the L flag when using the F flag. Alternatively, you can write R=403 instead, but F is the preferred shortcut.


 query : Re: Backlink To Post Ratio Do backlinks to post ratio play a role in rankings? Example I have 100+ posts, (increasing each day) I only have 7 backlinks, however, those backlinks are from very high

@Alves908

Quality content counts for more from Google's point of view. Don't chase a large quantity of backlinks; a smaller number of quality links will give your website a boost and increase its search ranking too.


 query : First Byte Time is slow after SSL installation This is the first time I try to install SSL for one of my clients but I can't figure out what I am doing wrong. Here is without SSL: https://www.webpagetest.org/result/18

@Alves908

Posted in: #Https

This is the first time I've tried to install SSL for one of my clients, but I can't figure out what I am doing wrong.

Here is without SSL: www.webpagetest.org/result/180208_2W_13870833ff7ddff2762c99b657acd43c/1/details/#waterfall_view_step1 and here is with SSL: www.webpagetest.org/result/180212_E1_8a48e92b57b4f494ba988fcfcc36425f/2/details/#waterfall_view_step1
As you can see, the First Byte Time grade went from A to F, and it now takes 1-2 seconds. Pingdom shows similar results.

Any suggestions? I've tried getting a new SSL certificate, which didn't help. OpenSSL etc. are up to date, and the server runs Ubuntu 14.04 with Apache 2.4.7.


 query : Google Adwords: issue with tracking model and valueurl Sorry for screenshot, I'm italian and I have Gadwords in Italian. I'm tryng to prepare a campaign. I cannot absolutely insert the tracking

@Alves908

Posted in: #GoogleAdwords

Sorry for the screenshot; I'm Italian and my AdWords console is in Italian.

I'm trying to prepare a campaign.

I cannot insert the tracking template at all. I did insert the final URL (the first field), but every time I click on "test", I get the red alert.

Besides this, I would like to add a parameter to the URL (e.g., see the {_a} with value "fr").

But when I click on the ad preview (in my AdWords console), Google AdWords links to my website without the parameters.

Where am I going wrong? Thank you very much.


 query : Re: "Invalid command 'php_value'" in .htaccess for WordPress site after switching hosts I moved my site which was on 00webhost to a new host. I set everything up but a 500 internal error shows

@Alves908

It's quite probable you've moved from a host where PHP was installed as an Apache module to one where it is configured as CGI/FastCGI. Instead of using php_value in .htaccess, you'll need to set these values in a .user.ini file. The format is the same as php.ini. (Some hosts also allow a local php.ini as well.)

The .user.ini file (note the dot prefix) goes in the document root of your site, just like .htaccess, but contains just the settings that relate to PHP.

For example, if you previously had something like the following in .htaccess:

# Include a PHP file on every request
php_value auto_prepend_file /home/dotcomwo/public_html/includes/config.inc.php


Then you would write this like the following in a .user.ini file:

; Include a PHP file on every request
auto_prepend_file="/home/dotcomwo/public_html/includes/config.inc.php"


 query : Re: Is Checking For mod_write Really Necessary? Recently I noticed that many people post .htaccess files here with: <IfModule mod_rewrite.c> Sometimes this even appears several times in the file!

@Alves908

No, most of the time, checking for mod_rewrite is not necessary. In fact, it is often preferable to remove this check.

If the mod_rewrite directives are required by your site then you should not wrap them in an <IfModule mod_rewrite.c> container. If they are required and mod_rewrite is not available, the directives simply fail silently: your site continues to break in some other way (masking the underlying cause) and possibly exposes something you weren't expecting. Without the <IfModule> wrapper, the site would break instantly (and completely) with an easily identified error, and nothing unexpected would be exposed.

The only times when the <IfModule mod_rewrite.c> should be used is either:


The site is designed to work with or without mod_rewrite. This is the case with WordPress. Without mod_rewrite the site still "works", you just don't get the "pretty" URLs.


Or


You have directives from another module that are dependent on mod_rewrite having executed successfully. So, in this case you would wrap these other directives in a <IfModule mod_rewrite.c> wrapper. For example, setting an HTTP response header (with mod_headers) based on some property of the request that you have determined using mod_rewrite. In this case you might wrap the Headers directive in a <IfModule mod_rewrite.c> container.
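
For example, a sketch of this second case (the directives are real, but the URL pattern and header are hypothetical):

<IfModule mod_rewrite.c>
    RewriteEngine On
    # Flag requests for anything under /downloads/
    RewriteRule ^downloads/ - [E=IS_DOWNLOAD:1]
    <IfModule mod_headers.c>
        # Only meaningful if the mod_rewrite rule above actually ran
        Header set X-Robots-Tag "noindex" env=IS_DOWNLOAD
    </IfModule>
</IfModule>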


Most of the time, if you know your server, then you don't need the <IfModule mod_rewrite.c> check - since you already know whether mod_rewrite is enabled or not. The only time when you do need it is if you are writing portable code to work on multiple servers and either condition #1 or #2 above are met.


Sometimes this even appears several times in the file!


And most of the time this is completely unnecessary. However, in defence of this behaviour, this often occurs when you have different plugins that edit .htaccess automatically and independently. The same is true for multiple RewriteEngine and RewriteBase directives.

For hand-written code you should never see this; when it does appear, it is generally the result of mindless copy/paste (which unfortunately seems to happen a lot with .htaccess directives).


Is there an advantage in having the <IfModule mod_rewrite.c> fail rather than the RewriteRule statements below it?


Only in the case of #1 or #2 above. Most of the time, no.


mod_rewrite now is so essential and ubiquitous that I do not check for it anymore. Might this cause a vulnerability of some kind?


If it's essential for your site then there is no need to check for it. No vulnerability.

In fact, the opposite could even be true... if the mod_rewrite directives are essential, then checking for the presence of mod_rewrite could cause you more problems if mod_rewrite suddenly became unavailable for whatever reason. As mentioned above, your mod_rewrite directives would now fail silently (stress "silently": no error), and the website might continue to function without a server error being triggered, yet return nonsense to users (and search engine bots) with a 200 OK status. If the <IfModule> wrapper had been omitted, you would have been notified of the problem immediately. With the error silenced, it may be some time before the problem is discovered, by which time more serious damage may already have been done.


 query : Re: How to 301 redirect back to a pretty link without creating an infinite loop in .htaccess I am doing a rewrite from .htaccess file to redirect pretty links to original URLs: RewriteRule ^users/?$

@Alves908

RewriteCond %{QUERY_STRING} (^|&)status=users($|&)
RewriteRule ^$ /users? [L,R=301]



You just need an additional condition (ie. RewriteCond directive) on your external redirect that detects whether it is a direct/initial request from the user, as opposed to a rewritten request by your other rewrite directive.

One way to do this is to check the REDIRECT_STATUS environment variable. This is not set on the initial request, but is set to "200" (as in 200 OK) after the first successful rewrite. So, you can check to see whether REDIRECT_STATUS is empty in order to detect the direct request and prevent a redirect loop.

For example:

RewriteCond %{ENV:REDIRECT_STATUS} ^$
RewriteCond %{QUERY_STRING} (^|&)status=users($|&)
RewriteRule ^$ /users? [L,R=301]




Another way is to check against THE_REQUEST server variable, which contains the first line of the initial request and does not change when the URL is rewritten. So, when requesting example.com/?status=users directly, THE_REQUEST would contain a string of the form:

GET /?status=users HTTP/1.1


This would allow you to have just one condition, as opposed to two as in the above, at the expense of perhaps a more complex regex. (Implementation is left as an exercise for the reader.)
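
(For illustration, one possible form of that single-condition variant, a sketch that matches only when status=users is the entire query string:

RewriteCond %{THE_REQUEST} ^GET\s/\?status=users\s
RewriteRule ^$ /users? [L,R=301]

Handling status=users anywhere within a longer query string would need a more involved pattern.)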


 query : Anyone know a simple CMS that just manages links? I'm putting together a writer's portfolio, and I need to be able to easily add/edit a list of external links to two pages of my choosing.

@Alves908

Posted in: #Cms #Links

I'm putting together a writer's portfolio, and I need to be able to easily add/edit a list of external links to two pages of my choosing. I also need to be able to show as many links as I want at a time, and preferably separate them into groups. PHP is preferred.

Every CMS I can find is either geared toward very specific types of content (pages and blog posts), or is otherwise just completely overkill. Thoughts?


 query : Re: .htaccess repeatedly asks for username/password I'm trying to add password protection to my personal website using .htpasswd and .htaccess. When I enter the username and password, I get asked again

@Alves908

AuthUserFile /.htpasswd


Both files are in the root directory.


The AuthUserFile directive takes an absolute filesystem path (or a path that is relative to the server root), not a path relative to the document root - which is what /.htpasswd looks like? It is unlikely that your "root directory" (by which I assume you mean your website's DocumentRoot) is in the root of your server (if it is then you should probably change it).

You'd expect this to be something like:

AuthUserFile /home/user/secure/.htpasswd


 query : Low Bounce Rate, Low Pages per Session I have some strange data I am trying to figure out. For most of 2017, the bounce rate for my site has been around 85%. The time on page was about

@Alves908

Posted in: #GoogleAnalytics

I have some strange data I am trying to figure out.

For most of 2017, the bounce rate for my site has been around 85%. The time on page was about 2:00 minutes. Pages per session were a little over 1.

I added a related posts widget to surface some relevant posts that visitors might also read. Since that time, the bounce rate has dropped to around 20%, and the average time on page has gone up to a little under 5:00 minutes. Yet the pages per session figure is the same, a little over 1.

Anyone have some idea what is going on?


 query : Re: Do ad-impressions count if the user is using an adblocker? Companies serving advertisements have a few different ways of billing you, with the most common being CPI (Cost Per Impression) and

@Alves908

Google counts an impression only when an ad call is made from the browser after it reads your webpage code. When an ad blocker is switched on with a filter that blocks the ad server or domain, the browser skips the ad tags, so no call is made to the ad server. It won't count as an impression, because no request was made, and hence you won't be charged.


 query : Does adobe analytics decrease website performance? Yes. Adding anything adds to page load time. Images, extra CSS, everything. Analytics typically use JavaScript, so in addition to the extra

@Alves908

Does adobe analytics decrease website performance?


Yes. Adding anything adds to page load time. Images, extra CSS, everything.

Analytics typically use JavaScript, so in addition to the extra load time for the JS file, there are further delays while the script executes. That means adding it will be more noticeable than adding a static resource such as an appropriately scaled image, for example.

Use the network profiling tools built in to your browser to evaluate load time, or try a third-party testing service such as GTmetrix, Sucuri, or keycdn. These will tell you exactly how much slower your pages are with the extra analytics bloat.

Don't forget that with the rapidly increasing adoption of browser ad blockers, your analytics scripts will be blocked for more and more users. Analytics scripts will still add some value, but comparing their results against more reliable figures, such as server logs, will become necessary to see how many visitors you actually have.


 query : Re: Will search engines react to querystring ids as different pages? I've implemented my website to be like domain/objects?id=xxx like domain/objects?id=bikeId What I am worrying is the way that search

@Alves908

but will they show it as different pages?


Yes, they are different URLs, therefore (unless otherwise stated) they will be seen and indexed as different pages in the SERPs.

If you wanted search engines to see these as the same URL then you would need to explicitly declare the canonical URL in the HTML and/or explicitly instruct Google to ignore the URL parameter in Google Search Console.
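
For instance (a hypothetical URL, and only appropriate if the parameterless page is really the one you want indexed):

<link rel="canonical" href="https://example.com/objects" />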


Is it suggested to implement a custom routing...


You do not need to do that for search engines. That URL structure is primarily to benefit users, not search engines.


 query : Google Search, First Match wrong part of my website when I search for the full name of my website, the first match Google returns is not the index but another page of it. Is it possible

@Alves908

Posted in: #GoogleSearch #WebDevelopment #WebHosting

When I search for the full name of my website, the first match Google returns is not the index page but another page of the site. Is it possible to change this behavior? I researched a bit, but I'm not sure which way to go. Will a sitemap change the behavior? Or should I change/add a robots.txt?

Any tips and suggestions welcome.


 query : Google Analytics seems to be affected after setting preferred domain I hope you can assist me with this! I've recently changed the preferred domain for my site https://example.com. Unfortunately,

@Alves908

Posted in: #GoogleAnalytics

I hope you can assist me with this! I've recently changed the preferred domain for my site example.com. Unfortunately, after doing so, it appears that views/visits are no longer being recorded by Google Analytics. I've edited the URL in GA's Admin settings to indicate that it is now example.com - perhaps it just needs some time to settle in?

Also, I'm not quite sure if I've set up the 301 redirect correctly after setting my preferred domain.

RewriteEngine On
RewriteCond %{HTTP_HOST} ^www\.YourDomain\.com$ [NC]
# $1 preserves the requested path in the redirect
RewriteRule ^(.*)$ http://YourDomain.com/$1 [R=301,L]


If I have an HTTPS site, do I need to add the 's' to the sections that contain 'HTTP'?


 query : Re: Verify 2 cookies with mod_rewrite before serving images I have the following mod_rewrite rule, which works fine in my Apache 2.x on CentOS 6 Linux machine, but it is not complete: RewriteCond

@Alves908

but I'm not sure, if the above RewriteCond's act as X and (Y or Z) or (X and Y) or Z


In the directives you posted it is the former: X and (Y or Z)

However, as mentioned in my comment above, it is more efficient to do the URL-path check in the RewriteRule pattern - since this is what's processed first. This avoids the RewriteRule being processed for every request (as is what happens when using a catch-all pattern like .*). You then have just two ORd conditions that check the absence of either cookie (in any order). For example:

RewriteEngine On
RewriteCond %{HTTP_COOKIE} !(^|;\s*)id=[0-9]+ [OR]
RewriteCond %{HTTP_COOKIE} !(^|;\s*)auth=[0-9a-fA-F]{32}
RewriteRule ^/sites/default/files/pictures/picture- /images/dummy.png [L]


The (^|;\s*) pattern prefix before the cookie name is just to safeguard against the situation where you have other cookies with a similar (but longer) name, eg. uid or userauth, etc. If that is not possible then this subpattern could be omitted.

There is no need to check for (;\s*) at the end of the cookie value, as in @quanta's answer, since this is not part of the value you are trying to validate. And the Cookie: header is not expected to end with a ; anyway, so this might not even match.


 query : Can I get pageviews that don't have a value for a custom dimension in Google Analytics? I'm trying to get an overview of pageviews that don't have a value for my custom dimension. The purpose

@Alves908

Posted in: #GoogleAnalytics

I'm trying to get an overview of pageviews that don't have a value for my custom dimension.

The purpose is to figure out why only 70% of users actually seem to get a value assigned to this dimension, because I'm trying to get it as close to 100% as possible.

Here's a screenshot of a config I've tried so far:


 query : Re: Ranking for alternate ordering of same keywords SEO I'm currently performing SEO for a childcare centre in a local area and have come across my first issue associated with ordering. It seems

@Alves908

First, search is not about keywords, it is about whole language.

Your results are exactly what I would expect.

When a search query is short, you are tying the hands of the search engine. It can be near impossible for the search engine to figure out what you want. Search engines do not match keywords; they match intent. With the search childcare Sydney, the assumption made is childcare in Sydney, which follows a common semantic locale search. Reversing the order as Sydney childcare does not trigger the same assumption, and the engine relies more on the semantic meaning you have given, which is virtually none at all.

I have written on this exactly here. For SEO should you primarily optimize for "<city> <service>" or "<service> in <city>"?

You have to understand how semantics works with subject, predicate, and object. You also have to understand how fact links work and why "in" is assumed in locale searches. I will not repeat it all again; instead, I invite you to read the linked answer for a full understanding. You will easily see how one search works as expected and the other does not.

I will add one more thing. People naturally type their search queries exactly how they think. For most of us, and especially for English speakers, that means thinking in terms of broad to specific. For example, childcare is broad and Sydney is specific. We also think "what, where". What? Childcare. Where? Sydney. In other words, your first query is the natural one. Semantics is based on many things, including understanding text by analyzing what is written and how people communicate, down to the smaller details I have described here. It is not just a science of the written word, but one that also takes in the functioning of the brain. Semantics has all of this hard-coded into it.

In short, there is nothing to fix except expectations.


 query : Re: How to stop access of a PHP file from other sites I read somewhere about hot linking of images. Preventing image hot linking helps to stop bandwidth theft from your site. Would it work for

@Alves908

This example is for images, but you can do the same for any file type. Make the folder inaccessible from the web (e.g. place it outside htdocs, or add .htaccess rules).

Then create a PHP script which handles all requests to the private images. This script has to do the following:

- Check if the user is authenticated.
- Check if the user is authorized to view the requested image.
- Open the image and print it to the browser (you need to set the correct HTTP headers so the content is treated as an image).

Demo

getimage.php

// Optional, user-defined: return true only if the logged-in user may
// access this file (e.g. check your session variables or cookies).
if (LoggedInUserCanAccessThisFile())
{
    $file = 'privatedir/image.jpg';
    $type = 'image/jpeg';
    header('Content-Type: ' . $type);
    header('Content-Length: ' . filesize($file));
    readfile($file);
    exit();
}


home.php/otherpage.php

<img src="getimage.php" />


(You can use src="getimage.php?userid=123" and, inside getimage.php, check whether that user is logged in before showing the image.)

(You can also use src="getimage.php?userid=123&imgfilename=image3.jpg" for dynamic images, and read it inside getimage.php as:

$file = 'privatedir/' . basename($_GET["imgfilename"]); // basename() guards against path traversal

)


 query : Countering a DMCA removal from Google Search for a root category page A category page got a Notice of DMCA removal from Google Search, submitted through Lumen Databases, for a copyright infringing

@Alves908

Posted in: #Dmca #Google #Legal

A category page got a Notice of DMCA removal from Google Search, submitted through Lumen Databases, for a copyright infringing thumbnail. We have noticed this is becoming a pattern and category root pages are being targeted more often by DMCA agents.

The site in question has user-submitted content, and its category pages consist of links with thumbnails pointing to about 50 posts.

Now the takedown was requested on the image file, the post URL, and the category (root) page, which means the category (root) page has been omitted from Google search results:

example.com/images/post-1.jpg
example.com/funny/post-1.html
example.com/funny.html
The situation seems absurd, the equivalent of this could be youtube homepage being omitted from Google search because there's a copyright infringing thumbnail that made it into popular videos.

Needless to say, the infringing material has been removed.

What can we do? What should be included in the counter notification for a successful reinstatement?


 query : Tag property og:type not recognized by the Structured Data Tool I am currently adding some Schema.org properties on a website and in order to validate my work, I am parsing all pages in Google’s

@Alves908

Posted in: #GoogleRichSnippetsTool #OpenGraphProtocol

I am currently adding some Schema.org properties to a website, and to validate my work I am parsing all pages in Google's Structured Data Testing Tool. Everything is fine except for one thing: for every page it displays a message saying Unspecified Type, even though it reports 0 errors and 0 warnings!



By clicking on it, you can see where the error is located in the document, and it shows the OG tags I have filled in. It displays the type «og:website» but still declares that the type is not recognized.

You can check this line is present and read by the tool:

<meta property="og:type" content="website" />


When I check the pages with other tools specifically related to OG tags, I have no warning at all.

I thought maybe a stray invisible character was preventing this data from being parsed, but the tool does see the value…!

What is wrong in my code? Should I worry about this, given that this tool is not made specifically to validate OG tags?

ref: the page in the Testing Tool

