Creating a Chatroom in ASP.NET - C#

I am trying to create a basic chat room just to enhance my programming and logical skills, but I can't figure out the functionality here.
The question that is bothering me is whether I should include a database or not?
(P.S.: I don't want to record any chat sessions.)
I tried on my own by using Application["variable"] to post messages, like:
Application["Message"] = txtMessage.Text;
txtDisplay.Text = txtDisplay.Text + Application["Message"].ToString();
I know this is not the correct way; there are limits to how many messages application state can hold, and it can't cope with heavy traffic. I tested it on a LAN and it worked fine, but I need a proper way to complete my project.
I need a kick start.

If you want a proper solution for the chat utility (with current technologies, including ASP.NET MVC), you should consider WebSockets [http://www.codeproject.com/Articles/618032/Using-WebSocket-in-NET-Part] and SignalR [http://www.asp.net/signalr].
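
As a kick start, a chat hub in SignalR can be as small as the sketch below. The hub and method names (ChatHub, Send, broadcastMessage) are illustrative assumptions, not anything from your existing code:

using Microsoft.AspNet.SignalR;

public class ChatHub : Hub
{
    // Called by a client; pushes the message to every connected client,
    // so nothing has to be stored on the server unless you want history.
    public void Send(string name, string message)
    {
        Clients.All.broadcastMessage(name, message);
    }
}

Each browser then registers a broadcastMessage callback through the SignalR JavaScript client, which removes the need for Application state entirely.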


Sharing game files between players

I am trying to create a game that should help people study for university. My problem is: how can players share questions between phones, and how do the files get into the right folder? Is there a way to set up a server to upload files and then download them? I had something in mind like the level-code system from "Mario Maker".
If a system like the one in "Mario Maker" would work, what kind of server do I need? I know I need some kind of database, but do I need something like MySQL? And how do I set it up? I never learned server programming, but I'm eager to learn.
You need to set up and host a REST API.
There are many different ways to go about it, and it really comes down to your preference of host service, programming language, and database library.
If you have no preference, I'd recommend:
AWS - powerful and you get 12 months of free hosting
Golang - modern, fast, and great for web apps
SQLite (go-sqlite3) - simple and lightweight
You will need to set up handlers for requests. So, for example:
To add a new level:
POST example.com/level/
To get an existing level:
GET example.com/level/:id
In Golang you can handle a request using:
package main

import "net/http"

func levelHandler(w http.ResponseWriter, r *http.Request) {
    switch r.Method {
    case "GET":
        // look the requested level up in the database and write it out
    case "POST":
        // read the uploaded level from the request body and store it
    }
}

func main() {
    http.HandleFunc("/level", levelHandler)
    http.ListenAndServe(":8080", nil)
}
To modify an existing level:
PUT example.com/level/:id
The problem now is that anyone in the world will be able to change any level. You will want to add user authentication, and that's a whole other can of worms.
You may consider using a REST asset for Unity.
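Since the phone side of a game like this is often Unity/C#, calling those endpoints from the client is only a handful of lines. A minimal sketch, assuming the routes above and a runtime with System.Net.Http (inside Unity itself, UnityWebRequest fills the same role):

using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class LevelClient
{
    static readonly HttpClient Client = new HttpClient();

    // Upload a new level; the JSON payload shape is a placeholder assumption.
    public static async Task UploadLevelAsync(string levelJson)
    {
        var content = new StringContent(levelJson, Encoding.UTF8, "application/json");
        var response = await Client.PostAsync("http://example.com/level/", content);
        response.EnsureSuccessStatusCode();
    }

    // Download an existing level by its id.
    public static Task<string> GetLevelAsync(string id)
    {
        return Client.GetStringAsync("http://example.com/level/" + id);
    }
}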
Just one final note: server programming is hard; be prepared to spend a lot of time trying, failing, and learning.

S3 Request time too skewed

I am currently building an application in C# that makes use of the AWS SDK for uploading files to S3.
However, I have some users who are getting the "Request time too skewed" error when the application tries to upload a file.
I understand the problem is that the user's clock is out of sync, but it is difficult to expect a user to change this, so I was wondering: is there any way to get this error not to occur (any .NET functionality to get accurate time via NTP or the like)?
Below is the current code I am using to upload files.
// Point the client at the eu-west-1 regional endpoint.
var _s3Config = new AmazonS3Config { ServiceURL = "https://s3-eu-west-1.amazonaws.com" };
// Temporary (STS session) credentials obtained earlier.
var _awsCredentials = new SessionAWSCredentials(credentials.AccessKeyId, credentials.SecretAccessKey, credentials.SessionToken);
var s3Client = new AmazonS3Client(_awsCredentials, _s3Config);
var putRequest = new PutObjectRequest
{
    BucketName = "my.bucket.name",
    Key = "/path/to/file.txt",
    FilePath = "/path/to/local/file.txt"
};
// Report progress while the upload runs.
putRequest.StreamTransferProgress += OnUploadProgress;
var response = await s3Client.PutObjectAsync(putRequest);
Getting the time from a time server is actually the easier part of your challenge. There is no built-in C# functionality that I'm aware of to get an accurate time from a time server, but a quick search yields plenty of sample code for NTP clients. I found a good comprehensive sample at dotnet-snippets.com (probably overkill for your case), and a very streamlined version on Stack Overflow in a page titled "How to Query an NTP Server using C#?". The latter looks like it might be effective in your case, since all you need is a reasonably accurate idea of the current time.
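For reference, a streamlined SNTP query in that spirit looks roughly like this; pool.ntp.org is an assumed server, and error handling is left out for brevity:

using System;
using System.Linq;
using System.Net;
using System.Net.Sockets;

static class Sntp
{
    // Ask an NTP server for the current UTC time (SNTP-style query).
    public static DateTime GetNetworkTime(string server = "pool.ntp.org")
    {
        var data = new byte[48];
        data[0] = 0x1B; // LI = 0, Version = 3, Mode = 3 (client)

        var address = Dns.GetHostEntry(server).AddressList
            .First(a => a.AddressFamily == AddressFamily.InterNetwork);

        using (var socket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp))
        {
            socket.ReceiveTimeout = 3000;
            socket.Connect(new IPEndPoint(address, 123)); // NTP is UDP port 123
            socket.Send(data);
            socket.Receive(data);
        }

        // The transmit timestamp starts at byte 40: 32-bit seconds and
        // 32-bit fraction, both big-endian, counted from 1900-01-01 UTC.
        ulong seconds = SwapEndianness(BitConverter.ToUInt32(data, 40));
        ulong fraction = SwapEndianness(BitConverter.ToUInt32(data, 44));
        ulong ms = seconds * 1000 + fraction * 1000 / 0x100000000UL;

        return new DateTime(1900, 1, 1, 0, 0, 0, DateTimeKind.Utc).AddMilliseconds(ms);
    }

    static uint SwapEndianness(uint x)
    {
        return ((x & 0x000000ff) << 24) | ((x & 0x0000ff00) << 8) |
               ((x & 0x00ff0000) >> 8) | ((x & 0xff000000) >> 24);
    }
}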
Now on to the real challenge: using that time with Amazon S3. First, some background, as it's important to understand why this is happening. The time skew restriction is intended to protect against replay attacks, as noted here:
http://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html
Because of this, Amazon built the current timestamp into the authentication signature used in the AWS SDK when constructing the HTTP(S) request. However, the SDK always uses the current time (there's no way to override it in the SDK methods):
https://github.com/aws/aws-sdk-net/blob/master/AWSSDK_DotNet35/Amazon.Runtime/Internal/Auth/AWS3Signer.cs#L119
Note that in all cases, the SDK uses AWSSDKUtils.FormattedCurrentTimestampRFC822 as the timestamp, and there's no way for the caller to pass a different value into the method.
So this leaves you with two options that I can see:
Bypass the Amazon SDK and construct your own HTTP requests using the time you retrieve from an NTP server. This is doable but not easy. Amazon discourages this approach because the SDK provides a lot of helpful wrappers to ensure that you're using the API as a whole correctly, handling a lot of the tedious message processing that you have to do yourself if you go straight HTTP. It also helps with a lot of the error handling and ensuring that things get cleaned up properly if a transfer is interrupted.
Clone the Amazon SDK git repository and create your own fork with a modification to allow you to pass in the current time. This would also be difficult, as you'd have to work out a way to pass the time down through several layers of API objects. And you'd lose the benefit of being able to easily update to a new SDK when one is available.
Sorry there's no easy answer for you, but I hope this helps.
If you're asking this question, you have probably taken AWS as far as you can go with the provided code sample.
I have found most of the async upload functionality provided by AWS to be more theoretical, or better suited to limited use cases, than production-ready for the mainstream, especially for end users with all those browsers and operating systems. :)
I would recommend rethinking the design of your program: create your own C# upload turnstile and keep the AWS SDK upload functions running as a background process (or sysadmin function), so that the AWS servers only ever see your server's time.

Ajax Chat long polling

After reading this post, I decided to write my own chat application.
Unlike the above post, my application polls more often, for instance whenever a user presses a key (in order to inform the other user that user1 is typing something) and, obviously, whenever a user sends a message.
This causes some problems: notifications are often not read correctly, and a sent message isn't always received on the other side.
It would be great if there were some way to send and receive different notification types (message, typing alert, new user joined, and so on).
How can I solve this?
SignalR is the solution to your problem. I understand that you want to develop your own solution and that the intrigue can be enticing, BUT please consider looking into SignalR. Getting to grips with SignalR will pay dividends and let you solve similar problems much more easily; it's a great tool to add to your development arsenal.
In fact, by all means continue developing your solution, but give SignalR the once-over for something else or another project. It really is worth looking at as the de facto method of achieving this type of client-server communication within .NET. It can be found on NuGet using the link below, so it's only a few clicks away!
http://nuget.org/packages/SignalR
I'm glad to inform you that my chat app is working now.
The problem was two calls to the WCF service in the same JavaScript event handler (the send-message button, where I notified the new message and also an alert such as "user is not writing anything"; yes, I needed to reset the previous "user is writing a message..." alert).
Now I'm able to send and receive many notifications and everything works fine. I've tested it with about 10 chat pages.
Surely I could achieve more functionality and stability by using the framework you suggested, but I'm happy to have found a relatively simple, customizable, and good solution for my purpose.
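
For anyone after the different-notification-types part: one simple pattern is to wrap every poll response in a typed envelope and dispatch on the type client-side. A minimal sketch, where the class, enum, and member names are all illustrative assumptions:

using System;
using System.Runtime.Serialization;

// The kinds of events a poll can return; extend as needed.
public enum NotificationType { Message, Typing, UserJoined }

[DataContract]
public class Notification
{
    [DataMember] public NotificationType Type { get; set; }
    [DataMember] public string Sender { get; set; }
    [DataMember] public string Text { get; set; }     // empty for typing/join events
    [DataMember] public DateTime SentAtUtc { get; set; }
}

The WCF service returns a list of these from each poll, and the JavaScript handler switches on Type to update the chat window, the typing indicator, or the user list.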

Scrape all of Google's search results based on certain criteria?

I am working on my mapper, and I need to get the full map of newegg.com.
I could try to scrape NE directly (which kind of violates NE's policies), but they have many products that are not available via direct NE search, only via a google.com search, and I need those links too.
Here is the search string that returns 16 million results:
https://www.google.com/search?as_q=&as_epq=.com%2FProduct%2FProduct.aspx%3FItem%3D&as_oq=&as_eq=&as_nlo=&as_nhi=&lr=&cr=&as_qdr=all&as_sitesearch=newegg.com&as_occt=url&safe=off&tbs=&as_filetype=&as_rights=
I want my scraper to go over all the results and log hyperlinks to all of them.
I can scrape all the links from Google search results, but Google has a limit of 100 pages per query (1,000 results) and, again, Google is not happy with this approach. :)
I am new to this; could you advise / point me in the right direction? Are there any tools/methodologies that could help me achieve my goals?
Google takes a lot of steps to prevent you from crawling their pages, and I'm not talking about merely asking you to abide by their robots.txt. I don't agree with their ethics, nor their T&C, not even the "simplified" version that they pushed out (but that's a separate issue).
If you want to be seen, then you have to let Google crawl your page; however, if you want to crawl Google, then you have to jump through some major hoops! Namely, you have to get a bunch of proxies so you can get past the rate limiting and the 302s + captcha pages that they post up any time they get suspicious about your "activity."
Despite being thoroughly aggravated by Google's T&C, I would NOT recommend that you violate it! However, if you absolutely need the data, then you can get a big list of proxies, load them into a queue, and pull a proxy from the queue each time you want to fetch a page. If the proxy works, put it back in the queue; otherwise, discard it. Maybe even keep a counter of failures for each proxy and discard it once it exceeds some threshold, as in the sketch below.
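A minimal sketch of that rotation scheme in C#, where the class name and the failure threshold are illustrative assumptions:

using System.Collections.Concurrent;
using System.Collections.Generic;

public class ProxyRotator
{
    const int MaxFailures = 3; // assumed threshold

    readonly ConcurrentQueue<string> _proxies = new ConcurrentQueue<string>();
    readonly ConcurrentDictionary<string, int> _failures = new ConcurrentDictionary<string, int>();

    public ProxyRotator(IEnumerable<string> proxyAddresses)
    {
        foreach (var proxy in proxyAddresses)
            _proxies.Enqueue(proxy);
    }

    // Pull the next proxy to use; returns null when the pool is exhausted.
    public string Next()
    {
        string proxy;
        return _proxies.TryDequeue(out proxy) ? proxy : null;
    }

    // Working proxies go back into the queue; repeat offenders are dropped.
    public void Report(string proxy, bool succeeded)
    {
        if (succeeded)
        {
            int ignored;
            _failures.TryRemove(proxy, out ignored);
            _proxies.Enqueue(proxy);
        }
        else if (_failures.AddOrUpdate(proxy, 1, (key, n) => n + 1) < MaxFailures)
        {
            _proxies.Enqueue(proxy); // give it another chance
        }
    }
}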
I've not tried it, but you can use Google's Custom Search API. Of course, it starts to cost money after 100 searches a day. I guess they must be running a business ;p
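For reference, a query against the Custom Search JSON API is a single HTTP GET; the API key and search-engine id (cx) below are placeholders you would obtain from the Google developer console:

using System;
using System.Net.Http;
using System.Threading.Tasks;

class CustomSearchExample
{
    static async Task Main()
    {
        const string apiKey = "YOUR_API_KEY";     // placeholder
        const string engineId = "YOUR_ENGINE_ID"; // placeholder custom search engine id (cx)
        string url = "https://www.googleapis.com/customsearch/v1" +
                     "?key=" + apiKey + "&cx=" + engineId +
                     "&q=site:newegg.com+Product.aspx";

        using (var client = new HttpClient())
        {
            // The response is JSON; each entry in "items" carries a link you can log.
            string json = await client.GetStringAsync(url);
            Console.WriteLine(json);
        }
    }
}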
It might be a bit late, but I think it is worth mentioning that it is possible to scrape Google professionally and reliably without causing problems with it.
Actually, it is not, as far as I know, a threat of any kind to scrape Google.
It is challenging if you are inexperienced, but I am not aware of a single case of legal consequences, and I always follow this topic.
Maybe one of the largest cases of scraping happened some years ago, when Microsoft scraped Google to power Bing. Google was able to prove it by placing fake results which do not exist in the real world, and Bing suddenly picked them up.
Google named and shamed them; that's all that happened, as far as I remember.
Using the API is rarely of any real use: it costs a lot of money for even a small number of results, and the free quota is rather small (40 lookups per hour before a ban).
The other downside is that the API does not mirror the real search results; in your case that is perhaps less of a problem, but in most cases people want the real ranking positions.
Now, if you do not accept Google's TOS, or choose to ignore it (they did not care about your TOS when they scraped you in their startup days), you can go another route:
mimic a real user and get the data directly from the SERPs.
The clue here is to send around 10 requests per hour (this can be increased to 20) from each IP address (yes, you use more than one IP). That amount has proven to cause no problems with Google over the past years.
Use caching, databases, and IP rotation management to avoid hitting it more often than required.
The IP addresses need to be clean, unshared, and, if possible, without an abusive history.
The originally suggested proxy list would complicate the topic a lot, as you would receive unstable, unreliable IPs with questionable abusive use, sharing, and history.
There is an open source PHP project at http://scraping.compunect.com which contains all the features you need to get started; I used it for my work, which has now been running for some years without trouble.
It is a finished project, mainly built to be used as a customizable base for your own project, but it also runs standalone.
Also, PHP is not a bad choice: I was originally skeptical, but I ran PHP (5) as a background process for two years without a single interruption.
The performance is easily good enough for such a project, so I would give it a shot.
Otherwise, PHP code is like C/Java: you can see how things are done and repeat them in your own project.

Ship maritime AIS information API

Is there an API or Web Service that can be used to read AIS data? Most links I read starting at Wikipedia (http://en.wikipedia.org/wiki/Automatic_Identification_System) say that AIS data is freely available but I'm having a hard time finding a provider of the data. A C# example or language agnostic web service would be helpful.
I am building a project map for a client's website: basically a world map, based on the Google Maps API, with pins where they did their projects; if you click on a pin, you get additional information about the project.
Most were just static addresses, which was OK, but they did 6 projects on luxury yachts, so I had the idea to base those markers on the current position of each yacht. I came across this service, which has a nice API for it:
https://www.marinetraffic.com
The downside is that it's a bit pricey.
The cheapest option, checking the daily position of 1 ship, is €5 a month.
So this would be €30 a month for a relatively useless but awesome feature.
Cheaper alternatives are welcome.
I ended up using vesseltracker.com for this project. Unfortunately it's a "call us for a price" service so I'll continue looking for a provider with a flat/reasonable/free rate.
There is a feed from the San Francisco Bay available for non-commercial use at hd-sf.com:9009.
I have used it to test my Java-based AIS decoder https://github.com/tbsalling/aismessages/wiki.
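That feed is a plain TCP stream of NMEA sentences, so a few lines of C# are enough to start seeing raw AIS messages (decoding them is the real work). A minimal sketch under that assumption:

using System;
using System.IO;
using System.Net.Sockets;

class AisFeedReader
{
    static void Main()
    {
        // Connect to the free San Francisco Bay feed mentioned above.
        using (var client = new TcpClient("hd-sf.com", 9009))
        using (var reader = new StreamReader(client.GetStream()))
        {
            string line;
            // Each line is a raw NMEA 0183 sentence (e.g. !AIVDM...),
            // which still needs to be run through an AIS decoder.
            while ((line = reader.ReadLine()) != null)
                Console.WriteLine(line);
        }
    }
}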
AIS data is freely available in the sense that you can freely receive it with the proper equipment, just by holding up an antenna in an area with shipping traffic.
Samples of received AIS data popped up quite a bit in my brief Google search, so I assume that your question is about where to get a real-time feed of AIS messages (that someone else is receiving). Whether you'd be able to get this at no cost is questionable; most organizations that would offer this seem to want you to either pay for the service or to share in kind.
There are a few places that offer a free stream, but none of them seem to offer any guarantees on availability in the short or long term.
So the answer to your question is "yes, and you should expect to pay something for it".
