How do I access a database from another PC (from anywhere)? - C#

I need to build two applications. In the first, users register themselves (it runs where the server is). The second is for the administrator, so he can see who has registered; he doesn't have time to go to where the server is, so he needs to view the information from another location, and I haven't been able to make that work. My other idea is to use web services, but I don't know how to publish a web service on the internet so it can be consumed from other PCs. Also, for application 2, what should the connection string be? I thought of using the IP address, but when I checked the IP I saw that it is dynamic and changes whenever the computer where the server runs is turned on. So how can I do this? (I couldn't get a connection with the IP, and I couldn't with the computer name either.)

[heavily edited - hope I maintained the spirit] I want to use a web service, but I don't have a great way to connect to it. I thought about using the IP address, but the IP address is dynamic
If you have servers that must remain available, they must be registered in some form of hostname lookup service, such as DNS. Most people do this by maintaining a corporate intranet, using a hosting service to serve their database/application, or using a dynamic DNS service such as DynDNS.
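As a rough illustration, once the server is reachable under a stable hostname (the DynDNS-style name below is made up), application 2 can simply point its connection string at that name instead of an IP; the SQL port also has to be reachable through the router/firewall:

    using System;
    using System.Data.SqlClient;

    class ConnectionStringSketch
    {
        static void Main()
        {
            // Minimal sketch, assuming SQL Server; the hostname, database name
            // and credentials are placeholders for your real values.
            string connectionString =
                "Server=myserver.dyndns.org,1433;Database=Registrations;" +
                "User Id=appUser;Password=secret;";

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand("SELECT COUNT(*) FROM RegisteredUsers", connection))
            {
                connection.Open();
                Console.WriteLine("Registered users: " + (int)command.ExecuteScalar());
            }
        }
    }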
[heavily edited...] I need two applications, one for users to register themselves, and the other for the administrator to see who has registered, without going to the server...
If this information is just for humans to read:
You could simply create a shared source of this information, such as an internal wiki, an Excel document on a Windows network share, or a Microsoft SharePoint site, and let users write new entries to it. The admin would simply read that document to find out the information. This is much more lightweight, and could be reused for many other purposes.
There is a lot of existing free wiki/CMS software you could install and use for this purpose.
If you need this to be accessible by other programs, rather than just human readers:
You'll have to make some sort of database, and possibly a web service to access it. Unfortunately, you'll need a lot more information for anyone to give you a good answer for these needs. Any answer will make a lot of assumptions, and might put you in a bad spot in terms of scalability, performance, security, or reliability.
Some basic questions to get you started (certainly not a complete list) - How many users? Where will they be located with respect to your servers (both this application you are writing, and the servers that the application seems to provide information about)? How safe must the information be? How much data? But even with this data, it is hard to recommend any sort of application design or network topology without knowing all of your requirements.
If you need this to be reliable and secure (e.g. you're supporting more than just yourself and a couple users), you'll probably need to turn this into a serious project, and devote business research, design, development, and IT resources to it. These resources can all be one person, but you should really go through all the motions if you don't want it to be an unmaintainable, insecure mess.
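If you do end up exposing the data through a web service, a bare-bones ASMX service along these lines is one possibility; everything here (names, connection string, table) is hypothetical, and a real version would need authentication at the very least:

    using System.Collections.Generic;
    using System.Data.SqlClient;
    using System.Web.Services;

    // Hypothetical sketch: a service hosted in IIS alongside the registration app.
    // Application 2 would consume it through a generated proxy or HttpWebRequest.
    [WebService(Namespace = "http://example.com/registrations")]
    public class RegistrationService : WebService
    {
        [WebMethod]
        public List<string> GetRegisteredUsers()
        {
            var users = new List<string>();
            string connectionString =
                "Server=localhost;Database=Registrations;Integrated Security=true;";

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand("SELECT UserName FROM RegisteredUsers", connection))
            {
                connection.Open();
                using (SqlDataReader reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        users.Add(reader.GetString(0));
                    }
                }
            }
            return users;
        }
    }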

Related

How can I certify the client?

I have the following scenario: a client connects over TCP to a server. The client sends its credentials to the server (password, username, MAC address). The server validates the credentials and handles the client if the data is correct.
But is this right? I want to ensure that only one computer (the one registered on the user profile) can use this client. That means both the client and the computer must be identified. I'm sure that my suggestion above is pretty wrong, but how can I do this better?
This is a tricky problem, for as the comments pointed out, users may fake any machine information to pretend to be another computer.
What I would recommend is that you hash the machine information (e.g. with SHA-256), so that it isn't immediately obvious what information you use to identify a computer. Of course this can always be learned by attackers in multiple ways (monitoring, disassembly, etc).
Here are some tips on which data you could use to uniquely identify a machine. I would pick several characteristics, put them all together and then hash them. This of course means that if the user changes e.g. his hard drive (and you use its serial to identify the computer), then he cannot connect anymore. I suppose you will need to offer a "re-create key" function anyway in case users switch or modify their computer.
This approach makes it harder to trick your system, by using multiple pieces of information and hashing them, forcing users to a) figure out what information you use and b) how you hash it (definitely use a salt).
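A minimal sketch of that idea; the salt and the hardware values below are placeholders (in practice you would read things like CPU, motherboard or drive identifiers, e.g. via WMI):

    using System;
    using System.Security.Cryptography;
    using System.Text;

    class MachineFingerprint
    {
        // Combine several machine characteristics with a salt and hash the result,
        // so the raw identifying data never leaves the machine.
        static string ComputeFingerprint(string salt, params string[] characteristics)
        {
            string combined = salt + "|" + string.Join("|", characteristics);
            using (SHA256 sha = SHA256.Create())
            {
                byte[] hash = sha.ComputeHash(Encoding.UTF8.GetBytes(combined));
                return Convert.ToBase64String(hash);
            }
        }

        static void Main()
        {
            string fingerprint = ComputeFingerprint(
                "some-secret-salt",  // placeholder; keep it out of easy reach
                "CPU-1234",          // placeholder for a CPU identifier
                "BOARD-5678",        // placeholder for a motherboard serial
                "DISK-9ABC");        // placeholder for a disk serial

            // Send the fingerprint to the server instead of the raw values.
            Console.WriteLine(fingerprint);
        }
    }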
However, it still is very possible to do just that. The question now is: how high are your requirements? A typical user won't be able to bypass this, if that's all you want then it should be sufficient.
I'm not sure whether there can even be a "perfect" solution to this problem, as you want to protect your system from your very own user. This means that all encryption keys, certificates or whatever else you use are known by and available to the user. On top of that, users have access to your client application and can analyze it. They can modify their computer in ways you cannot prevent or foresee. All in all, I think the best you can do is make it a huge pain in the backside to bypass your guards, so much so that nobody will want to bother with it, or at least so that you minimize the number of users who do.
The only other thing I can think of is fancy logging and monitoring on your server, e.g. to detect that the same machine identity is connected multiple times, and then alert you or abort all but one connection. Again, this can only reduce abuse, not completely prevent it.

Send Outlook instance over network

I've got an Outlook tool that runs on the client (PC-A) and sets some folder permissions. Now I want to apply the settings remotely from my computer (PC-B) so I don't have to go to every employee's machine.
I've searched Google but haven't found any information useful to me, and besides that I don't know how to code this. A friend told me that I could use a service for this, or write a server/client where PC-A listens for commands.
Can somebody help me?
There is more than one way to do it. I'm sorry I don't have specific code examples, just simple steps on how to; maybe you can expand on them.
Assumption: clients and your server are on a LAN and in the same domain.
Solution 1: You can have a daily or scheduled job collecting the settings you need and pushing them to a centralized DB. The server (your machine) can then poll the centralized DB for the settings. Depending on how you design the tables, you can have the client module change its settings based on the settings you make on the server. Since everything is in the centralized DB, both the client and the server hit the DB to get the information. A little complex, but not hard to understand.
Solution 2: Use System.Net.Sockets to create a custom server and client listening on specific ports; Tech.pro has a good article on it. A rough sketch follows below.
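Something like this, just as a minimal sketch (the port and the one-line text protocol are arbitrary choices): PC-A runs the listener and applies whatever command it receives, and PC-B connects and sends a command.

    using System;
    using System.IO;
    using System.Net;
    using System.Net.Sockets;

    class SettingsListener
    {
        // Runs on PC-A: listens on an arbitrary port and handles one command per connection.
        static void Main()
        {
            var listener = new TcpListener(IPAddress.Any, 9500); // port is arbitrary
            listener.Start();
            Console.WriteLine("Waiting for commands...");

            while (true)
            {
                using (TcpClient client = listener.AcceptTcpClient())
                using (var reader = new StreamReader(client.GetStream()))
                {
                    string command = reader.ReadLine();
                    Console.WriteLine("Received: " + command);
                    // Here you would call your Outlook folder-permission code instead.
                }
            }
        }
    }

    class SettingsSender
    {
        // Runs on PC-B (admin): connects to one client PC and sends a single command.
        public static void Send(string host, string command)
        {
            using (var client = new TcpClient(host, 9500))
            using (var writer = new StreamWriter(client.GetStream()))
            {
                writer.WriteLine(command);
            }
        }
    }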
Assumption: your clients are on the internet, you are as well, and you are not in the same domain.
Solution 1: The DB approach is quite solid here too, and it gives you the ability to maintain different settings for different users and take a more customized approach. You can push and pull the data as JSON so that network bandwidth is not heavily utilized.
Solution 2: The TCP approach should also work, assuming the clients are connected directly to the internet and not through a proxy. I am less sure about this approach, but it is one way.
Alternatively you can implement solution 1 or 2 using a service, but personally I would prefer a process that runs on my machine only when it needs to.
Feel free to correct me.

ASP.NET Cache Management

I have three applications running in three separate app pools. One of them is an administrative app that few people have privileged access to. One of the functions the administrative app provides is creating downtime notices. So when a user goes into the administrative app and creates a downtime notice, the other two apps are supposed to pick up on there being a new notice and display it on the login page.
The problem is that these notices are cached and being that each app is in a separate app pool the administrative app doesn't have any way to clear the downtime notices cache in the other two applications.
I'm trying to figure out a way around this. The only thing I can think of is to insert a record in the DB that denotes the cache needs to be cleared and the other two apps will check the DB when loading the login page. Does anyone have another approach that might work a little cleaner?
*Side note, this is more widespread than just the downtime notices, but I just used this as an example.
EDIT
Restarting the app pools is not feasible as it will most likely kill background threads.
If I understand correctly, you're basically trying to send a message from the administrative app to the other apps. Maybe you should consider creating a WCF service on these apps that could be called from the administrative application. That is a standard way to communicate between different apps if you don't want to use a shared medium such as a database, and it doesn't force you into a polling model.
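As a rough sketch of that (all names here are made up), each front-end app could host a small contract like the one below, and the administrative app would call it on both apps after saving a notice:

    using System.ServiceModel;
    using System.Web;

    // Hypothetical contract hosted by each of the two front-end apps.
    [ServiceContract]
    public interface ICacheControl
    {
        [OperationContract]
        void InvalidateCache(string cacheKey);
    }

    public class CacheControlService : ICacheControl
    {
        public void InvalidateCache(string cacheKey)
        {
            // Drop the entry from this app's own ASP.NET cache;
            // it gets reloaded from the database on the next request.
            HttpRuntime.Cache.Remove(cacheKey);
        }
    }

The administrative app can then use a ChannelFactory<ICacheControl> (or a generated service reference) against each app's endpoint and call InvalidateCache("DowntimeNotices").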
Another way to look at this is that this is basically an inter-application messaging problem, which has a number of libraries already out there that could help you solve it. RabbitMQ comes to mind for this. It has a C# client all ready to go. MSMQ is another potential technology, and one that already comes with Windows - you just need to install it.
If it's database information you're caching, you might try your luck at setting up an SqlCacheDependency.
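For example, something along these lines (a sketch only; it assumes table-based notifications have been enabled with aspnet_regsql and that each app's web.config defines a <sqlCacheDependency> database entry named "NoticesDb"):

    using System.Web;
    using System.Web.Caching;

    public static class NoticeCache
    {
        // Hypothetical helper: the cached notices are evicted automatically in
        // every app when the DowntimeNotices table changes.
        public static void CacheNotices(object notices)
        {
            var dependency = new SqlCacheDependency("NoticesDb", "DowntimeNotices");
            HttpRuntime.Cache.Insert("DowntimeNotices", notices, dependency);
        }
    }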
Otherwise, I would recommend not using the ASP.NET cache and instead finding a third-party solution that uses a distributed caching scheme; that way all applications use one cache instead of three separate ones.
I'm not saying this is the best answer or even the right answer, it's just what I did.
I have a series of ecommerce websites on separate servers and data centers that rely on pulling catalog data from a central backoffice website and then caching it locally. In my first iteration of this I simply used GET requests: the central location would ping the corresponding consuming website to initiate its own cache refresh routine. I used SSL on each of the ecommerce servers, as I already had that set up, and could then have the backoffice web app send credentials via an SSL GET to initiate the refresh securely.
At a later stage, we found it more efficient to use sockets instead, with the backoffice acting as the server and each consuming website as a client listening for changes in the data. The backoffice website could then notify the corresponding website when a particular account changed, and communicate exactly what changed. This approach is much more granular, and we could update in small bits as needed as opposed to one large chunked update, but it was definitely more complicated than our first try.

Self-installing memory cache - does it exist?

I've read about various cross-machine caching mechanisms (Redis, Velocity, nMemCached, etc...). They all seem to require a central machine to manage the cache.
Is there such a thing as a cache engine that self installs - e.g. if caching does not exist on the current subnet, it creates a node. If it does exist, it joins the machine to the caching pool?
Context: I have an app that deploys to around 100 users within the same subnet via ClickOnce. Each of these users access a resource via the WAN (across country and in some cases across the ocean) that performs very CPU-intensive computations and takes significant time to complete.
As a result, the app feels sluggish. I've done what I can to alleviate that by throwing long-lived queries onto separate threads, but that only takes you so far. I've added local caching (via a SQL Compact DB), which works pretty well, but most users access similar information and together they exert a fair bit of pressure on the computation server. I think I can take it to the next level if I can ship an in-memory cache with my app that is able to seamlessly work with the other machines to create a network-wide caching mechanism.
You're the one who knows what will work best, but having a "server app" that coordinates the whole lot might be a good thing:
User1 asks Server "I need X".
Server tells User1 "Well, go get it from the database."
User2 asks Server "I need X."
Server tells User2 "User1 got it."
...
User1 tells Server "I don't want X anymore."
You could also mark some types of data as "uncacheable" due to their volatile nature, or to avoid clogging up one user's connection. Sure, the server will get a lot of requests, but compare that with a broadcast-across-the-network solution. If I didn't understand your problem correctly, just write a comment, disregard this, and I'll remove the answer so as not to mislead SO users.
If you don't want a master machine, or you don't want to rely on a specific network layout/installation, you could consider Peer to Peer as an option. WCF has native peer to peer support. Here is a link that looks somewhat relevant to your need: How To Design State Sharing In A Peer Network

Upload a file to multiple servers

I see a ton of questions about uploading multiple files, but none about uploading a single file to multiple servers, so here goes...
I have an ASP.NET app that will be running on two load balanced servers, and I would like to allow users to upload files and have them end up on both servers. What is the cleanest way to do this? I am using IIS 6 btw.
Some ideas that come to mind are:
1) Use a virtual directory that points to some shared location that both servers can access. Will there be any access issues if the application runs as Network Service? I'm assuming the application will need to run as a user account that exists on the machine hosting the shared location. How should the permissions be set up for this?
2) It would be nice if I could post the request via jQuery to both of my servers, referencing them by their port numbers. Even though the servers are on the same domain, this violates the same-origin policy, right?
Is there another solution I'm overlooking? How do other sites do this?
I think you want to consider this problem more carefully - having a pair (or more) of servers means that some of them will be offline some of the time (at least for occasional reboots).
Uploads when not all of the servers are online won't be able to be sent to all servers immediately, so you'd need either an intermediate server (which would be a point of failure unless it was highly available itself) or a queuing system to "remember" which files were where, and to transfer them when the relevant servers were restored.
Also, you'll want a backup system, and some way to add newly provisioned servers to your cluster. You will also want a way to verify that these files are the same on every server, in case they get out of sync. Your architecture needs a lot of careful thought. I don't have the answers :)
The cleanest approach is forwarding the files server-side, really. If you force two uploads via JavaScript, not only will you have to worry about working around XSS safeguards, but you'll also force the user to use their very limited upstream bandwidth twice for each file.
You shouldn't be exposing that kind of detail to the client anyway. The browser doesn't need to know where the file ends up, just who to send it to. If you keep that logic server-side, not only do you keep the details hidden (and thus less prone to errors and exploits), but you also get more control over the process. You can create a gateway service later that handles a multitude of back-end stores, and you can handle failing servers better. You can queue failed uploads and retry them. All of this comes at a very low cost if you do it on the server side, but is a pain to make work reliably on the client side.
Keep back end logic to your back end. Load balancing should be hidden from the user, so there's no need to tell them where they are sending their files exactly. Make it optional, if you want, but hide the action from them. Just swallow the file on the gateway server (which can be either of the load balancing servers -- in fact, it should probably be load balanced, too, so it should work with either of them in place) and send it to the other servers from there. The transfer from server to server will probably be faster too.
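A rough sketch of the gateway idea in ASP.NET Web Forms; the internal URL, folder and field name are all made up, and a real version would queue and retry the forwarding step:

    using System;
    using System.IO;
    using System.Net;
    using System.Web;

    public partial class Upload : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            HttpPostedFile posted = Request.Files["file"]; // hypothetical field name
            if (posted == null) return;

            // 1. Store the file locally on whichever server received the upload.
            string localPath = Path.Combine(Server.MapPath("~/Uploads"),
                                            Path.GetFileName(posted.FileName));
            posted.SaveAs(localPath);

            // 2. Forward it to the other load-balanced server over the internal network.
            //    "server2.internal" is a made-up hostname for the peer server.
            using (var client = new WebClient())
            {
                client.UploadFile("http://server2.internal/upload.ashx", localPath);
            }
        }
    }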
Your best bet is definitely a NAS, if one is available -- a shared file system that is not specifically associated with any machine. Then you can focus on making the NAS highly available via a clustered frontend.
If that's not an option, you can use a virtual directory on each machine that points to one folder on one of the machines, but then you lose redundancy.
I'm faced with this same challenge at my work. My app is small but needs to be highly available, and there's no NAS in sight. So in each machine's web.config I place a list of all the UNC paths that the uploaded file should be stored to. After uploading to a temp folder, I copy the file to each machine one by one. It's not perfect -- a machine could go down, in which case when it came back up it might not have all the files (and the copy would be slowed by the hunt for the missing machine) -- but in my situation uploads are so infrequent that it's not worth improving.
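A minimal sketch of that copy step; the appSettings key and the UNC paths are made up:

    using System.Configuration;
    using System.IO;

    public static class FileReplicator
    {
        // Copies an uploaded file from the temp folder to every UNC path listed
        // in web.config, e.g.
        // <add key="ReplicaPaths" value="\\web1\uploads;\\web2\uploads" />
        public static void Replicate(string tempFilePath)
        {
            string[] targets = ConfigurationManager.AppSettings["ReplicaPaths"].Split(';');
            string fileName = Path.GetFileName(tempFilePath);

            foreach (string target in targets)
            {
                // If a machine is down this throws; a retry queue would make it more robust.
                File.Copy(tempFilePath, Path.Combine(target, fileName), true);
            }
        }
    }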
As others have mentioned, Javascript is right out. Upload once.
I have seen this problem solved with a NAS, using credentials for the app pool that can read/write files to that NAS. Make sure your NAS is set up for high availability to prevent a single point of failure, i.e. hot swap with RAID, multiple array controllers, power supplies, etc.
You could also put folder monitoring software on the servers that keeps certain directories in sync. I don't recommend this solution.
