So I have ga.js code tracking orders in my web application. These orders are then picked up in the Conversions --> Tracking --> Transactions section in Google Analytics. The thing is, on average only 80-90% of my orders show up in GA. I've read online that it's normal for a small percentage of orders not to show up in GA. Is this a correct assumption to make?
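For reference, the legacy ga.js e-commerce calls that feed the Transactions report look roughly like this (the account ID and all order values below are placeholders):

```javascript
// Legacy ga.js e-commerce tracking via the _gaq command queue.
// Account ID and order values are placeholders.
var _gaq = _gaq || [];
_gaq.push(['_setAccount', 'UA-XXXXXX-1']);
_gaq.push(['_trackPageview']);

// One _addTrans per order: id, affiliation, total, tax, shipping, city, state, country
_gaq.push(['_addTrans', '1234', 'My Store', '29.99', '2.40', '5.00', 'London', '', 'UK']);

// One _addItem per line item: order id, SKU, name, category, unit price, quantity
_gaq.push(['_addItem', '1234', 'SKU-001', 'Widget', 'Widgets', '22.59', '1']);

// Sends the transaction; if ga.js never loads (e.g. blocked), nothing is sent.
_gaq.push(['_trackTrans']);
```

If ga.js itself is blocked or fails to load, the queued commands are simply never executed, which is exactly how orders go missing.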
Yes, that is a correct assumption, because many users block tracking sites like that one.
The information is sent by a JavaScript call, which also means that users with JavaScript disabled, or who hit a JavaScript error on the page, will also fail to send that info to Google.
The blocking can be done by antivirus/anti-malware programs, or directly by adding the tracking domains to the system's hosts file and pointing them at localhost so the scripts never load. It's an easy trick if you want to avoid sites that monitor you.
If you are wondering whether there is a way to record that information even when the user blocks the script, there is: you send it from your code-behind, directly to Google's servers, but it's a little more complicated.
And one tip: better to keep that information to yourself only.
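The server-side idea can be sketched with the classic Google Analytics Measurement Protocol; the tracking ID, client ID and order values below are placeholders, and in production you would POST the payload to https://www.google-analytics.com/collect:

```javascript
// Build a Measurement Protocol "transaction" hit (classic/Universal GA).
// Tracking ID, client ID and order values are placeholders.
function buildTransactionHit(order) {
  const params = new URLSearchParams({
    v: '1',                       // protocol version
    tid: 'UA-XXXXXX-1',           // tracking ID (placeholder)
    cid: order.clientId,          // anonymous client ID
    t: 'transaction',             // hit type
    ti: order.id,                 // transaction ID
    tr: order.revenue.toFixed(2), // transaction revenue
  });
  return params.toString();
}

const payload = buildTransactionHit({ clientId: '555', id: '1234', revenue: 29.99 });
console.log(payload);

// In production, the server would then send it, e.g.:
// fetch('https://www.google-analytics.com/collect', { method: 'POST', body: payload });
```

Because this runs on your server rather than in the browser, ad blockers and hosts-file tricks on the client cannot prevent it.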
Related
I'd like to write a very simple web traffic blocker for my kids using C#. I'm an experienced C# developer. I'd like the following basic features:
The whitelist is linked to a specific Windows 10 user account (the program would know who is logged in)
All websites blocked unless specifically whitelisted
This little piece of software cannot be shut down or uninstalled except by the admin
When the user tries to access a blocked site, my program shows a popup that lets them request access.
This program needs to be lower-level than any browser, so it blocks traffic in all browsers, and even programs that embed browser controls or browser-like capabilities. I'm guessing the OS is aware when web requests are initiated and which sites are being requested, so I need to know how to tap into that OS-level event.
I've never done any programming that blocks web traffic so I'm looking for a source code example of doing this. Also how can I prevent the software from being stopped or removed, unless a password is supplied?
I'm asking for advice and to be pointed in the right direction...not for any other help. Minimal code samples or links that demonstrate how to accomplish my requirements would be very helpful and greatly appreciated.
Note: I have google searched and can't find anything that quite works the way I want, so I'd like to build my own. Thanks in advance.
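One pragmatic starting point (not truly OS-level; for that, look at the Windows Filtering Platform or a driver like WinDivert) is to route traffic through a local proxy that only forwards whitelisted hosts. The whitelist check itself is simple; sketched here in JavaScript for brevity, with the per-user lookup, proxy plumbing and admin-password protection left out:

```javascript
// Whitelist check: allow a host if it equals, or is a subdomain of, an allowed domain.
// The per-user whitelist lookup and the proxy/WFP plumbing are out of scope here.
const whitelist = ['example.com', 'wikipedia.org']; // placeholder per-user list

function isAllowed(host, allowed) {
  const h = host.toLowerCase();
  // endsWith('.' + domain) prevents lookalikes such as evilwikipedia.org
  return allowed.some(d => h === d || h.endsWith('.' + d));
}

console.log(isAllowed('en.wikipedia.org', whitelist)); // true
console.log(isAllowed('evil-wikipedia.org.attacker.net', whitelist)); // false
```

Note the hosts file alone cannot implement a whitelist (it can only blacklist known names), which is why a proxy or filter driver is the usual route.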
I have a situation recently reported by users of an app I built more than two years ago.
To cut a long story short: a URL is sent to the customer by SMS, and clicking it executes some things based on an encrypted key passed in the URL query string. Earlier this worked fine, because the user had to click the link in the SMS to execute it. But nowadays SMS clients, for example on the iPhone, pick up the URL and fetch it to show a preview (similar to what WhatsApp, Skype, etc. do). The problem is that the link is one-time-use, so by the time the user actually clicks it, it has already expired, because the preview fetch counted as the first hit.
So in this situation the user is never able to go to the next step, since the link is already consumed by the message preview.
I have a workaround, for example showing a fake page or something similar, but I'd rather not use it, since I understand this situation is quite common and you genius folks out there may have something better to share.
Please share how I might identify clients that are just looking for og tags, or how to detect these kinds of clients, so that the actual request is not processed unless the user manually clicks the link.
As far as I know there is no consistent user agent that clients have to use in the Open Graph spec.
Blocking based on the user agent is therefore an ever-moving target; each app could use a different agent if it wished.
The way I have always countered this is that a GET should never be a destructive action.
A GET should always be safe to run over and over.
If you need a destructive action, the page should include some form of user input (a button or link) that triggers a POST to the server.
If required, you can add an extra layer of security by asking the user to confirm something from the data, e.g. their phone number.
That way, if the link falls into the wrong hands (remember, SMS messages are not encrypted, so they can be snooped), then without that extra information the link alone cannot execute the destructive action.
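The pattern above can be sketched with an in-memory token store (a stand-in for whatever database the real app uses): the GET only checks the token and renders a confirm button, and only the POST consumes it, so preview bots can fetch the page as often as they like.

```javascript
// One-time action tokens: GET is safe and repeatable, POST consumes.
// The in-memory Map stands in for a database table of issued tokens.
const tokens = new Map([['abc123', { used: false, phone: '07700900000' }]]);

// Safe for link previewers: hitting this any number of times changes nothing.
function handleGet(token) {
  const t = tokens.get(token);
  if (!t || t.used) return '410 Gone';
  return '<form method="POST"><button>Confirm</button></form>';
}

// Only an explicit user action (the POST) consumes the token.
function handlePost(token, phoneEntered) {
  const t = tokens.get(token);
  if (!t || t.used) return '410 Gone';
  if (phoneEntered !== t.phone) return '403 Forbidden'; // extra check from the data
  t.used = true;
  return '200 OK';
}
```

The phone-number check is the "confirm something from the data" idea: a snooped link alone is not enough to trigger the action.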
A client came to me with a request for a web app that does a lot of processing in the backend (reads from a file, writes to a web service). My question is: since this "process" (kicked off when the user clicks 'Go') may take hours, how do I make the processing continue after the user closes the web page? Please let me know if this does not make any sense and I can give more information. Any help would be greatly appreciated, thanks!
You have to create an MS Windows Service for it.
You then give that service a database table that your website writes requests into, so the two can communicate.
This is, basically, a "batch job" requirement, and that's how I suggest that you approach it.
The client would use the web page, not to perform the work, but rather to manage the list of batch jobs that are performing the work: scheduling them, starting or stopping them, viewing their status and current output, and so on. (Yup, just like they did it in the 1970s, but without "//FOOBAR JOB (123,456)" ... thank god.)
The user goes to the web-page and enters a request. A back-end service process on the host (the batch job manager...) now starts the job, perhaps on one computer; perhaps on several at a time. The user can, meanwhile, log off the web-site and do whatever he pleases. Then, he can come back at any time, go back to whatever the job-monitoring web page may be, and see how things are going. Stop the job, suspend/resume, what have you.
There are lots of batch-job monitoring tools out there already, for all sorts of environments, both free and commercial. So, it's not like you have to build all this stuff; you merely have to identify what off-the-shelf package works best for you and for your client.
The best solution is to do the work in a Windows service, and use your web app just to trigger the processing.
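The enqueue-and-poll shape described above can be sketched like this (an in-memory array stands in for the database table, and a single "tick" stands in for the Windows service's polling loop; all names are illustrative):

```javascript
// Web tier inserts a job row; the service tier polls for pending jobs and runs them.
const jobs = []; // stands in for a database table shared by site and service

function enqueueJob(payload) {
  const job = { id: jobs.length + 1, payload, status: 'pending', output: null };
  jobs.push(job);
  return job.id; // returned to the web page so the user can check status later
}

// One polling tick of the "service": pick up a pending job and process it.
function workerTick() {
  const job = jobs.find(j => j.status === 'pending');
  if (!job) return;
  job.status = 'running';
  job.output = `processed ${job.payload}`; // the hours of real work go here
  job.status = 'done';
}

function jobStatus(id) {
  const job = jobs.find(j => j.id === id);
  return job ? job.status : 'unknown';
}

const id = enqueueJob('orders.csv');
workerTick();
console.log(jobStatus(id)); // 'done'
```

The key point is that the browser only ever touches `enqueueJob` and `jobStatus`; the long-running work happens in the service, so closing the page changes nothing.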
In my ASP.NET MVC application I need to allow only one click, or rather, the calculation triggered by the button click should run only once per IP. E.g. each user can vote only once. Is it possible to do this with C#? Where is the right place to start, please? Let me know if I need to rephrase the question. Thanks!
There are numerous ways to do this; here is what I have done with my app.
First, the IP address is a good start, though not 100% reliable: many people can share a single IP address, and people can restart their connections and get a new one. Still, storing voters' IP addresses in the database is a good start.
Second, you can use cookies. That again isn't a 100% secure approach, as people can delete them, but you can use it in conjunction with the IP address.
Third, if you have a Facebook app, this is where you get the best security: you just check the Facebook UserId. You can do this if you make your voters log in with Facebook, even if your app isn't a custom Facebook app on a page tab.
So for the code, you need a database to store these values and check against on each vote. On a high-traffic site this gets very DB-heavy, so some caching is also a good idea (if you are on shared grid/web-farm hosting, make sure your cache is not in-process).
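The IP-plus-cookie check can be sketched like this (in-memory sets stand in for the database tables; in ASP.NET MVC you would read the request's client IP and a cookie value instead):

```javascript
// Double-check votes against both the IP and a cookie token.
// The in-memory sets stand in for the database tables described above.
const votedIps = new Set();
const votedCookies = new Set();

function tryVote(ip, cookieToken) {
  if (votedIps.has(ip) || votedCookies.has(cookieToken)) {
    return false; // already voted, by either signal
  }
  votedIps.add(ip);
  votedCookies.add(cookieToken);
  return true;
}

console.log(tryVote('203.0.113.7', 'c1')); // true: first vote
console.log(tryVote('203.0.113.7', 'c2')); // false: same IP, new cookie
console.log(tryVote('203.0.113.9', 'c1')); // false: new IP, same cookie
```

Checking both signals means a voter has to change IP *and* clear cookies to vote twice, which raises the bar without being bulletproof.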
Yes, totally agreed with Joel Coehoom.
I would suggest using the MAC address in this case.
For help with this, try the following links:
PhysicalAddress
Get MAC address of client machine using C#
I'm writing a video cms and want all my users to have new assets displayed immediately as they come in.
If I'm right, Facebook updates its wall page in realtime. So when I post something to a friend, it immediately displays on his wall. The realtime web, as they say.
I wonder how you do that? Not the technology of client-server-communication, but what goes on on the server.
I understand the principles of the observer-pattern.
But a wall is in fact a query on a table of messages.
How does the observer know which query a user is interested in?
Does it hold the queries of all connected users and rerun them whenever something new comes in?
I believe Google realtime search works that way too.
Thank you for helping me out.
When you open Facebook, open the script timeline in your browser's developer tools to see which scripts are executing on the page. You'll notice a polling script being executed several times a second. So the page checks the server several times a second to see whether there is any new information to display.
http://www.ajaxwith.com/Poll-vs-Push-Technology.html - this should give you some background on the subject.
Facebook uses AJAX and a JavaScript timer that polls in the background, looking for anything that's changed. Other sites use the same type of functionality to update stock quotes embedded in a page, etc. It's not truly updating immediately; it's updating as frequently as the JavaScript timer hits their server. This is because web browsers use HTTP, which is a request/response protocol: a browser won't display anything that isn't a direct response to a request it initiated, so there's no way to push content directly to the browser from your web server.
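That polling loop can be sketched like this; `fetchUpdates` is a stand-in for the AJAX call to your server, injected so the logic is easy to follow (and test) in isolation:

```javascript
// Client-side polling: repeatedly ask the server for anything newer than
// the last item we saw. fetchUpdates stands in for an AJAX call.
function makePoller(fetchUpdates, onNewItems) {
  let lastSeen = 0; // timestamp of the newest item already displayed
  return function pollOnce() {
    const items = fetchUpdates(lastSeen); // e.g. GET /updates?since=lastSeen
    if (items.length > 0) {
      lastSeen = Math.max(...items.map(i => i.ts));
      onNewItems(items); // append the new posts to the wall in the DOM
    }
  };
}

// In a page you would run it a few times a second, like Facebook's wall script:
// setInterval(makePoller(fetchUpdatesViaAjax, renderItems), 500);
```

Tracking `lastSeen` is what keeps repeated polls cheap: the server only returns items newer than the client's high-water mark, so most polls come back empty.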