Hundreds of pins in static Google Maps API - C#

This works fine:
static void Main(string[] args)
{
    string latlng = "55.379110,-3.1420898";
    string url = "http://maps.googleapis.com/maps/api/staticmap?center=" + latlng +
        "&zoom=6&size=1000x1000&maptype=satellite&markers=color:blue%7Clabel:S%7C" +
        latlng + "&sensor=false";
    using (WebClient wc = new WebClient())
    {
        wc.DownloadFile(url, @"C:\Bla\img.png");
    }
}
Just wondering, how could I add hundreds of pins and save the map as a PNG? Surely there is a limit on GET requests, and one cannot append arbitrarily many query string parameters.
PS: The limit is 8192 characters - see the "URL Size Restriction" section of https://developers.google.com/maps/documentation/static-maps/intro

I'm afraid downloading and storing Static Maps images is against the ToS:
You may not store and serve copies of images generated using the Google Static Maps API from your website. All web pages that require static images must link the src attribute of an HTML img tag or the CSS background-image attribute of an HTML div tag directly to the Google Static Maps API so that all map images are displayed within the HTML content of the web page and served directly to end users by Google.
https://developers.google.com/maps/faq?csw=1#tos_staticmaps_reuse

You will have to deal with a URL size restriction in the Google Static Maps API.
https://developers.google.com/maps/documentation/static-maps/intro#url-size-restriction
URL Size Restriction
Google Static Maps API URLs are restricted to 8192 characters in size.
In practice, you will probably not have need for URLs longer than
this, unless you produce complicated maps with a high number of
markers and paths. Note, however, that certain characters may be
URL-encoded by browsers and/or services before sending them off to the
API, resulting in increased character usage. For more information, see
Building a Valid URL.
To add multiple markers, append one markers parameter per pin:
https://developers.google.com/maps/documentation/static-maps/intro
https://maps.googleapis.com/maps/api/staticmap?center=Brooklyn+Bridge,New+York,NY&zoom=13&size=600x300&maptype=roadmap&markers=color:blue%7Clabel:S%7C40.702147,-74.015794&markers=color:green%7Clabel:G%7C40.711614,-74.012318&markers=color:red%7Clabel:C%7C40.718217,-73.998284&key=YOUR_API_KEY
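Building such a URL programmatically is straightforward. Here is a minimal sketch (the pin list and output path are made up for illustration) that appends one markers parameter per pin and checks the documented 8192-character limit before downloading:
using System;
using System.Collections.Generic;
using System.Net;

class StaticMapDownloader
{
    const int MaxUrlLength = 8192; // documented Static Maps URL limit

    static void Main()
    {
        // Hypothetical pins - replace with your own coordinates.
        var pins = new List<string> { "55.379110,-3.1420898", "51.507351,-0.127758" };

        string url = "http://maps.googleapis.com/maps/api/staticmap" +
            "?center=55.379110,-3.1420898&zoom=6&size=1000x1000&maptype=satellite";
        foreach (string pin in pins)
        {
            url += "&markers=color:blue%7C" + pin;
        }

        if (url.Length > MaxUrlLength)
        {
            Console.WriteLine("Too many pins for one request; the URL exceeds the limit.");
            return;
        }

        using (var wc = new WebClient())
        {
            wc.DownloadFile(url, @"C:\Bla\img.png");
        }
    }
}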

You may also be interested in: Issue 207: KML layer in Static Maps API

How to get number of followers of an instagram page without API

I am trying to get the number of followers of a page programmatically; either the exact number (e.g. 521356) or the abbreviated form (521K) would do.
I have tried DownloadData to download the entire page, but I couldn't seem to find the number of followers:
System.Net.WebClient wc = new System.Net.WebClient();
byte[] raw = wc.DownloadData("https://www.instagram.com/gallery.delband/");
string webData = System.Text.Encoding.UTF8.GetString(raw);
textBox1.Text = webData;
I would like to be able to get the number of followers, but I can't find the data in what this returns.
The problem is that you cannot get the Instagram web page as you see it in the browser without executing JavaScript, and System.Net.WebClient does not execute JS.
But if you analyse the html source of the page, you'll see that the followers count is included within a <meta> tag with name="description":
<meta content="88.5k Followers, 1,412 Following, 785 Posts - See Instagram photos and videos from گالری نقره عیار ۹۲۵‌‌ ترکیه (@gallery.delband)" name="description" />
To grab this information from the source, use a regex:
// requires: using System.Text.RegularExpressions;
var pattern = @"<meta content=\""([0-9k KMm\.,]+) Followers, .*\"" name=\""description\"" \/>";
var match = Regex.Match(webData, pattern);
var followers = match.Groups[1].Value;
The pattern means: find a string that starts with <meta content=", followed by a dynamic run of the characters 0-9, k, K, M, m, ',', '.' or ' ' (the actual followers count), followed by the text " Followers", then any text, ending with name="description" />. Because we parenthesized the dynamic part, the regex engine gives us this dynamic value as a group result.
WebClient just makes a simple HTTP request, which returns very little for a lot of sites these days. You basically get a page that tells the browser "Great, now fetch that JavaScript bundle over there to get started". So to get the information you are after, you'll need something more advanced like CefSharp to actually load the page and execute its scripts. Preferably you'd use CefSharp.OffScreen so as not to show a browser window. Then you can parse out the information you wanted.
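A minimal CefSharp.OffScreen sketch (API details vary across CefSharp versions; the event-based wait below is one common pattern, and the profile URL is the one from the question):
using System;
using System.Threading.Tasks;
using CefSharp;
using CefSharp.OffScreen;

class Program
{
    static async Task Main()
    {
        Cef.Initialize(new CefSettings());

        using (var browser = new ChromiumWebBrowser("https://www.instagram.com/gallery.delband/"))
        {
            // Wait until the page (including its scripts) has finished loading.
            var loaded = new TaskCompletionSource<bool>();
            browser.LoadingStateChanged += (s, e) =>
            {
                if (!e.IsLoading) loaded.TrySetResult(true);
            };
            await loaded.Task;

            // The DOM has now been built by JavaScript; grab the rendered source.
            string html = await browser.GetSourceAsync();
            Console.WriteLine(html.Length); // parse the followers count out of this
        }

        Cef.Shutdown();
    }
}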

Get location name by giving ZIP codes

I need to display the location and city name when a user enters a ZIP Code. How do I get the corresponding location names?
I would use a website like
http://www.zipinfo.com/search/zipcode.htm
and just send the ZIP code to it, retrieve the response, and parse out the city name, easy as that.
Try the USPS ZIP code API - http://www.usps.com/webtools/welcome.htm
You can use the PlaceFinder geocoding web service to make REST-based requests using the postal code you want to resolve to a name. The service supports both XML and JSON response formats. Here is a listing of the response elements returned by the service.
Using .NET, you would leverage the client or request/response classes in the System.Net namespace to make a request to the service and process the response.
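A rough sketch of that request/response round trip (the endpoint here is a placeholder, not the actual PlaceFinder URL; substitute the real service address, parameters and key):
using System;
using System.Net;

class ZipLookup
{
    static void Main()
    {
        // Placeholder endpoint - swap in the real geocoding service URL and API key.
        string url = "https://example.com/geocode?postal=90210&format=json";

        using (var wc = new WebClient())
        {
            string json = wc.DownloadString(url);
            // Parse the JSON (e.g. with Json.NET) to pull out the city name.
            Console.WriteLine(json);
        }
    }
}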
The simplest way would be to use strings. You could alternatively create a ZIP class, if you wanted to get fancy.
using System;
using System.Collections.Generic;

class Program
{
    // declare your variable
    private static Dictionary<string, string> zipLookup;

    public static void CreateZips()
    {
        zipLookup = new Dictionary<string, string>();
        zipLookup.Add("90210", "Beverly Hills");
        // fill all other values, probably from a db
    }

    static void Main(string[] args)
    {
        CreateZips();
        var test = "90210";
        if (zipLookup.ContainsKey(test))
        {
            Console.WriteLine(test + "=" + zipLookup[test]);
        }
        else
        {
            Console.WriteLine(test + " location unknown");
        }
    }
}
For more details on ZIPs, check out Wikipedia
I work in the address verification industry for a company called SmartyStreets. The solutions presented here are all functional in a variety of ways, but beware of their limitations and specialties. For example, Yahoo's service is more like address suggestion, not validation. The USPS web service is quite limited in the results it returns, for example: you won't get the County and Component data of an address, actual deliverability, etc.
For a more flexible, free solution, may I suggest our LiveAddress API? It's a RESTful endpoint which, given a street address (for example) and ZIP code, will fully and accurately complete the entire address.
Alternatively, you can use https://thezipcodes.com/
It has almost all the data I needed for this kind of search.
See http://thezipcodes.com/docs for how to use the API.

Cache images provided through script

I have a script which, using several query string variables, serves an image. I am also using URL rewriting within IIS 7.5.
So images have a URL like this:
http://mydomain/pictures/ajfhajkfhal/44/thumb.jpg
or
http://mydomain/pictures/ajfhajkfhal/44.jpg
This is rewritten to:
http://mydomain/Picture.aspx?group=ajfhajkfhal&id=44&thumb=thumb.jpg
or
http://mydomain/Picture.aspx?group=ajfhajkfhal&id=44
I added caching rules to IIS to cache JPG images when they are requested. This works for my images that are real images on disk. When images are provided through the script, however, they are somehow always requested through the script without being cached.
The images do not change that often, so keeping the cache for at least 30 minutes (or until the file changes) would be best.
I am using .NET/C# 4.0 for my website. I tried setting several cache options in C#, but I can't seem to find out how to get these images cached client-side, while my static images are cached properly.
EDIT I use the following options to cache the image on the client side, where 'fileName' is the physical filename of the image (on disk).
context.Response.AddFileDependency(fileName);
context.Response.Cache.SetETagFromFileDependencies();
context.Response.Cache.SetLastModifiedFromFileDependencies();
context.Response.Cache.SetCacheability(HttpCacheability.Public);
context.Response.Cache.SetExpires(DateTime.Now.AddTicks(600));
context.Response.Cache.SetMaxAge(new TimeSpan(0, 5, 0));
context.Response.Cache.SetSlidingExpiration(true);
context.Response.Cache.SetValidUntilExpires(true);
context.Response.ContentType = "image/jpg";
EDIT 2 Thanks for pointing that out, that was indeed a very stupid mistake ;). I changed it to 30 minutes from now (DateTime.Now.AddMinutes(30)).
But this doesn't solve the problem. I really think the problem lies with Firefox. I use Firebug to track each request, and I am starting to think I am doing something fundamentally wrong: normal (static, cached) images get back a "304 (Not Modified)" response, while my page always gets back a "200 (OK)".
(Screenshot of the Firebug capture, originally at http://images.depl0y.com/capture.jpg)
If what you mean by "script" is the code in your Picture.aspx, I should point out that C# is not a scripting language, so it is technically not a script.
You can use the Caching API provided by ASP.NET.
I assume you already have a method which produces the image's bytes. Here is how you can use the Caching API around it:
string fileName = ... // The name of your file
byte[] bytes = null;

if (HttpContext.Current.Cache[fileName] != null)
{
    bytes = (byte[])HttpContext.Current.Cache[fileName];
}
else
{
    bytes = ... // Retrieve your image's bytes
    HttpContext.Current.Cache[fileName] = bytes; // Set the cache
}

// Send it to the client
Response.BinaryWrite(bytes);
Response.Flush();
Note that the keys you use in the cache must be unique to each cached item, so it might not be enough to just use the name of the file for this purpose.
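For example, a composite key built from the rewritten URL's parts (group, id and isThumb are hypothetical variables parsed from the query string here) keeps a thumbnail and its full-size image from colliding:
// group, id and isThumb are hypothetical values parsed from the rewritten URL.
string cacheKey = "img:" + group + ":" + id + (isThumb ? ":thumb" : ":full");
HttpContext.Current.Cache[cacheKey] = bytes;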
EDIT:
If you want to enable caching the content on the client side, use the following:
Response.Cache.SetCacheability(HttpCacheability.Public);
You can experiment with the different HttpCacheability values. With this, you can specify how and where the content should be cached. (Eg. on the server, on proxies, and on the client)
This will make ASP.NET send the client the caching rules in the appropriate HTTP headers.
This will not guarantee that the client will actually cache it (it depends on browser settings, for example), but it will tell the browser "You should cache this!"
The best practice would be to use caching on both the client and the server side.
EDIT 2:
The problem with your code is the SetExpires(DateTime.Now.AddTicks(600)). 600 ticks is only a fraction of a second... (1 second = 10000000 ticks)
Basically, the content gets cached but expires the moment it gets to the browser.
Try these:
context.Response.Cache.SetExpires(DateTime.Now.AddMinutes(5));
context.Response.Cache.SetMaxAge(TimeSpan.FromMinutes(5));
(The TimeSpan.FromMinutes is also more readable than new TimeSpan(...).)
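That also explains the 200-vs-304 difference you see in Firebug: a dynamic handler may have to answer conditional requests itself, otherwise the browser's revalidation always produces a fresh 200. A minimal sketch (assuming fileName is the image's physical path, as in the question) that honors If-Modified-Since:
// Answer conditional GETs so revisits get a 304 instead of a full 200.
DateTime lastWrite = System.IO.File.GetLastWriteTimeUtc(fileName);
string ifModifiedSince = context.Request.Headers["If-Modified-Since"];
DateTime since;
if (ifModifiedSince != null
    && DateTime.TryParse(ifModifiedSince, out since)
    && lastWrite <= since.ToUniversalTime().AddSeconds(1)) // HTTP dates have 1-second resolution
{
    context.Response.StatusCode = 304;
    context.Response.SuppressContent = true; // headers only, no body
    return;
}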

C# Screen Scraper - Handle long uri's

I'm building an HTML screen scraper which parses URLs and then compares them with a set of other URLs.
The comparison is done with Uri.AbsoluteUri or Uri.Host.
My problem is that when I'm creating a new Uri (new Uri(url)), a UriFormatException is thrown when the URL is too long or contains too many slashes.
Since my predefined set of URLs contains several (too) long URLs, I cannot just use Substring to fetch only a part of the URL.
What would be the best way to handle this?
Thanks
You can use Uri.TryCreate to check if the URI is valid before you new it.
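A minimal sketch (url being the scraped string):
Uri uri;
if (Uri.TryCreate(url, UriKind.Absolute, out uri))
{
    // Valid - safe to compare on uri.AbsoluteUri or uri.Host.
    Console.WriteLine(uri.Host);
}
else
{
    // Invalid - skip it instead of letting new Uri(url) throw UriFormatException.
    Console.WriteLine("Skipping invalid URL: " + url);
}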
You should not get an exception on a URL that is this short. The following program runs fine on VS2008:
static void Main(string[] args)
{
    Uri uri = new Uri("http://stackoverflow.com/questions/1298985/c-screen-scraper-handle-long-uris/c-screen-scraper-handle-long-uris/c-screen-scraper-handle-long-uris/c-screen-scraper-handle-long-uris/c-screen-scraper-handle-long-uris/c-screen-scraper-handle-long-uris/c-screen-scraper-handle-long-uris/c-screen-scraper-handle-long-uris/");
    Uri uri2 = new Uri("http://stackoverflow.com/questions/1298985/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/1/");
    Console.ReadLine();
}

Truncating Query String & Returning Clean URL C# ASP.net

I would like to take the original URL, truncate the query string parameters, and return a cleaned-up version of the URL. I would like this to happen across the whole application, so doing it in global.asax would be ideal. I also think a 301 redirect would be in order.
ie.
in: www.website.com/default.aspx?utm_source=twitter&utm_medium=social-media
out: www.website.com/default.aspx
What would be the best way to achieve this?
System.Uri is your friend here. This has many helpful utilities on it, but the one you want is GetLeftPart:
string url = "http://www.website.com/default.aspx?utm_source=twitter&utm_medium=social-media";
Uri uri = new Uri(url);
Console.WriteLine(uri.GetLeftPart(UriPartial.Path));
This gives the output: http://www.website.com/default.aspx
[The Uri class does require the protocol, http://, to be specified]
GetLeftPart basically says "get the left part of the URI up to and including the part I specify". This can be Scheme (just the http:// bit), Authority (the www.website.com part), Path (/default.aspx) or Query (the query string).
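For the example URL above, the four UriPartial values give:
Uri uri = new Uri("http://www.website.com/default.aspx?utm_source=twitter&utm_medium=social-media");
Console.WriteLine(uri.GetLeftPart(UriPartial.Scheme));    // http://
Console.WriteLine(uri.GetLeftPart(UriPartial.Authority)); // http://www.website.com
Console.WriteLine(uri.GetLeftPart(UriPartial.Path));      // http://www.website.com/default.aspx
Console.WriteLine(uri.GetLeftPart(UriPartial.Query));     // the full URL, query string included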
Assuming you are on an aspx web page, you can then use Response.Redirect(newUrl) to redirect the caller.
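Since you want this application-wide with a 301, here is a minimal Global.asax sketch (assuming every query string on every page should be stripped; adjust the condition if some pages need their parameters. Response.RedirectPermanent needs ASP.NET 4.0; on earlier versions set the 301 status and Location header yourself):
protected void Application_BeginRequest(object sender, EventArgs e)
{
    HttpRequest request = HttpContext.Current.Request;
    if (!string.IsNullOrEmpty(request.Url.Query))
    {
        // Strip everything from the '?' onwards and redirect permanently (301).
        string cleanUrl = request.Url.GetLeftPart(UriPartial.Path);
        HttpContext.Current.Response.RedirectPermanent(cleanUrl);
    }
}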
Here is a simple trick
Dim uri = New Uri(Request.Url.AbsoluteUri)
Dim reqURL = uri.GetLeftPart(UriPartial.Path)
Here is a quick way of getting the root path sans the full path and query.
string path = Request.Url.AbsoluteUri.Replace(Request.Url.PathAndQuery,"");
This may look a little better.
string rawUrl = String.Concat(this.GetApplicationUrl(), Request.RawUrl);
if (rawUrl.Contains("/post/"))
{
    bool hasQueryStrings = Request.QueryString.Keys.Count > 1;
    if (hasQueryStrings)
    {
        Uri uri = new Uri(rawUrl);
        rawUrl = uri.GetLeftPart(UriPartial.Path);

        HtmlLink canonical = new HtmlLink();
        canonical.Href = rawUrl;
        canonical.Attributes["rel"] = "canonical";
        Page.Header.Controls.Add(canonical);
    }
}
Followed by a function to properly fetch the application URL.
Works perfectly.
I'm guessing that you want to do this because you want your users to see pretty looking URLs. The only way to get the client to "change" the URL in its address bar is to send it to a new location - i.e. you need to redirect them.
Are the query string parameters going to affect the output of your page? If so, you'll have to look at how to maintain state between requests (session variables, cookies, etc.) because your query string parameters will be lost as soon as you redirect to a page without them.
There are a few ways you can do this globally (in order of preference):
If you have direct control over your server environment then a configurable server module like ISAPI_ReWrite or IIS 7.0 URL Rewrite Module is a great approach.
A custom IHttpModule is a nice, reusable roll-your-own approach.
You can also do this in the global.asax, as you suggest.
You should only use the 301 response code if the resource has indeed moved permanently. Again, this depends on whether your application needs to use the query string parameters. If you use a permanent redirect a browser (that respects the 301 response code) will skip loading a URL like .../default.aspx?utm_source=twitter&utm_medium=social-media and load .../default.aspx - you'll never even know about the query string parameters.
Finally, you can use POST method requests. This gives you clean URLs and lets you pass parameters in, but will only work with <form> elements or requests you create using JavaScript.
Take a look at the UriBuilder class. You can create one with a url string, and the object will then parse this url and let you access just the elements you desire.
After completing whatever processing you need to do on the query string, just split the URL on the question mark:
Dim _CleanUrl as String = Request.Url.AbsoluteUri.Split("?")(0)
Response.Redirect(_CleanUrl)
Granted, my solution is in VB.NET, but I'd imagine that it could be ported over pretty easily. And since we are only looking for the first element of the split, it even "fails" gracefully when there is no query string.
