I have an ASP.NET web page written in C#. This page communicates with a host on a server. The server address is currently hardcoded in my controller as
static PatientController()
{
    //Create the HttpClient once and use it
    _httpClient = new HttpClient();
    _httpClient.BaseAddress = new Uri("http://localhost:9002/prom2etheus/v1/");
    _patientList = new List<Patient>();
}
How can I configure the URI as a parameter that a user can enter at the start of the UI? My problem is that the host is running on a server, and my UI is running on the same server, but in a Docker container. So the IP of the host can change, and I don't want to hardcode it in my controller methods. What is the better way to do this?
It depends. If you want the URL to persist, then storing it in a database or a file is a good idea.
On the other hand, if it is okay to prompt the user every time the app starts, it could be stored in memory. This would have the added benefit of being much faster to read/write, since there's no I/O. There could be other storage options, such as third-party storage providers, too. In both cases, you would have to think about thread safety.
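For instance, since the UI runs in a Docker container, one variant of the config/file idea is to read the base address from an environment variable, which is easy to set per container. A minimal sketch, assuming a variable named HOST_BASE_ADDRESS (the name is an assumption):
// Requires: using System; using System.Net.Http; using System.Collections.Generic;
static PatientController()
{
    // "HOST_BASE_ADDRESS" is a hypothetical variable name; fall back to the old default.
    string baseAddress = Environment.GetEnvironmentVariable("HOST_BASE_ADDRESS")
        ?? "http://localhost:9002/prom2etheus/v1/";
    _httpClient = new HttpClient();
    _httpClient.BaseAddress = new Uri(baseAddress);
    _patientList = new List<Patient>();
}
You would then pass the address at container start, e.g. docker run -e HOST_BASE_ADDRESS=http://10.0.0.5:9002/prom2etheus/v1/ ...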
Using WCF, .NET 4.5, Visual Studio 2015, and want to use per-session instancing, not singleton. The services provided are to be full-duplex, over net.tcp.
Suppose I have two machines, A & B...
B as a client, connects to a "service" provided as a WCF service on same machine B, and starts talking to it, call it object “X”. It ALSO connects to another instance of the same service, call it object “Y”
A as a client, wants to connect to, and use, exact same objects B is talking to, objects “X” and “Y”, except now it’s remote-remote, not local-remote.
“X” and “Y” are actually video servers, and both have “state”.
Can I do this? How, when I’m a client, do I specify WHICH service instance I want to connect to?
Obviously, on machine "B", I could kludge this by having the services just be front-ends with no "state", which communicate with some processes running on "B", but that would require I write a bunch of interprocess code, which I hate.
Machine B is expected to be running 100's of these "video server" instances, each one being talked to by a local master (singleton) service, AND being talked to by end-user machines.
I realize this question is a bit generic, but it also addresses a question I could not find asked, or answered, on the Internets.
I just thought of one possible, but kludge-y, solution: since the master service is a singleton, when service instance "X" is created by the end-user, it could connect to the singleton master service through a proxy. Then the singleton can talk back to instance "X" over a callback channel. Yeah, that would work! Messy, but possible.
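For reference, the callback plumbing for that idea looks roughly like this in WCF (a sketch; the contract names are hypothetical):
using System.ServiceModel;

// The master service declares a callback contract for talking back to instances.
[ServiceContract(CallbackContract = typeof(IMasterCallback))]
public interface IMasterService
{
    [OperationContract]
    void RegisterInstance(string instanceId);
}

public interface IMasterCallback
{
    [OperationContract(IsOneWay = true)]
    void ForwardToInstance(string message);
}

// Inside the master's RegisterInstance implementation, capture the channel back
// to the calling instance "X" and keep it for later use:
// IMasterCallback backChannel = OperationContext.Current.GetCallbackChannel<IMasterCallback>();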
I'd still like to know if end user A and end user B can both talk to the same (non-singleton) service instance on machine C through some funky channel manipulation or something. As I understand the rules of WCF, this simply isn't possible. Perhaps maybe if you're hosting the service yourself, instead of IIS, but even then, I don't think it's possible?
I've faced the same problem and solved it by creating two service references, one for the local and one for the remote. Let's call them LocalServiceClient and RemoteServiceClient.
In a class, create a property called Client (or whatever you like to call it):
public LocalServiceClient Client {
    get {
        return new LocalServiceClient();
    }
}
Okay, this is for only one of them. Now just create the other, and choose which one to use with a compiler flag:
#if DEBUG
public LocalServiceClient Client {
    get {
        return new LocalServiceClient();
    }
}
#else
public RemoteServiceClient Client {
    get {
        return new RemoteServiceClient();
    }
}
#endif
Instantiate any instances of your Client using the var keyword, so they will be implicitly typed, or just use Client directly:
var client = Client;
client.DoSomething...
//or
Client.DoSomething...
This way, when you are working locally, it will connect to the local service, and in the Release configuration (make sure you are on Release when publishing) it will compile for the remote one. Make sure you have the exact same signature/code for both services on the WCF side, though.
There are also ways to do this dynamically in code, or via web.config; those would work too, but they are usually overkill. You probably need to connect to the local one while debugging and the remote one in production, and this approach gives you exactly that.
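For completeness, if you ever do need the runtime variant: generated WCF clients also accept an endpoint address directly, so a sketch like this works too (the endpoint name and URL are placeholders):
// Pick the service address at runtime instead of at compile time.
// "BasicHttpBinding_IService" must match an endpoint name in your config;
// the URL here is a placeholder.
var client = new RemoteServiceClient("BasicHttpBinding_IService",
    new EndpointAddress("http://remote-server/Service.svc"));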
I'm attempting to recycle an app pool on IIS6 programmatically through a web application. I have searched all over the net and found a bunch of solutions (Most involving impersonation) but none of them seem to work. The most common error I get is E_ACCESSDENIED despite entering a valid username and password. Can anybody point me in the right direction?
The solution I use for this sort of thing (where you're trying to run a process from ASP.NET that needs administrative privileges) is the following:
Write whatever you need done as a self-hosted WCF service, preferably an HTTP REST service, so it's easy to call (even using just a browser for testing).
Make sure your service runs under an administrator account. You can use the Task Scheduler to make sure the service is running at all times, as well as to run it under an administrator account.
Execute methods on the service from your ASP.NET application using a WCF Client
And it works all the time no matter what "process" I'm trying to run from within an ASP.NET application.
Now as far as the details (code) are concerned, let me know if you need help. The code below
is what you'd have in a console application in order to make it a self-hosted WCF service.
In this case it's an HTTP service listening on port 7654.
static void Main(string[] args)
{
    var webServiceHost = new WebServiceHost(typeof(AppCmdService), new Uri("http://localhost:7654"));
    ServiceEndpoint ep = webServiceHost.AddServiceEndpoint(typeof(AppCmdService), new WebHttpBinding(), "");
    var serviceDebugBehavior = webServiceHost.Description.Behaviors.Find<ServiceDebugBehavior>();
    serviceDebugBehavior.HttpHelpPageEnabled = false;
    webServiceHost.Open();
    Console.WriteLine("Service is running");
    Console.WriteLine("Press enter to quit ");
    Console.ReadLine();
    webServiceHost.Close();
}
AppCmdService is a WCF service class that looks like this (in my case). In your case you probably don't need a response from your service; in my case I'm getting a JSON response. The actual implementation of what you're trying to do will obviously be different, but I'm assuming you already have that piece worked out, so simply call a method of that class from here.
[ServiceContract]
public class AppCmdService
{
    [WebGet(UriTemplate = "/GetCurrentExecutingRequests/?", ResponseFormat = WebMessageFormat.Json)]
    [OperationContract]
    public IEnumerable<ExecutingRequestJson> GetCurrentExecutingRequests()
    {
        return CurrentExecutingRequestJsonProvider.GetCurrentExecutingRequests("localhost");
    }
}
On your ASP.NET side, you don't really need a WCF client. All you need is a way to make an HTTP call to the service. So you can simply use HttpWebRequest to make the call out to your service, which in turn executes your process.
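For example, a rough sketch of that call from the ASP.NET side, assuming the service above is listening on port 7654:
// Requires: using System.IO; using System.Net;
var request = (HttpWebRequest)WebRequest.Create(
    "http://localhost:7654/GetCurrentExecutingRequests/");
using (var response = (HttpWebResponse)request.GetResponse())
using (var reader = new StreamReader(response.GetResponseStream()))
{
    string json = reader.ReadToEnd(); // the JSON the service returned
}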
Hope all of this makes sense.
Maybe this SO question helps you. There are several solutions (also for IIS6):
Restarting (Recycling) an Application Pool
IMHO the best you could do is decide on a concrete approach, and then, when you run into an exception, ask a concrete question with the source code of your approach. Otherwise it's just too vague to answer your question.
We are currently working on an API for an existing system.
It basically wraps some web-requests as an easy-to-use library that 3rd party companies should be able to use with our product.
As part of the API, there is an event mechanism where the server can call back to the client via a constantly-running socket connection.
To minimize load on the server, we want to only have one connection per computer. Currently there is a socket open per process, and that could eventually cause load problems if you had multiple applications using the API.
So my question is: if we want to deploy our API as a single standalone assembly, what is the best way to fix our problem?
A couple options we thought of:
Write an out of process COM object (don't know if that works in .Net)
Include a second exe file that would be required for events, it would have to single-instance itself, and open a named pipe or something to communicate through multiple processes
Extract this exe file from an embedded resource and execute it
None of those really seem ideal.
Any better ideas?
Do you mean something like Net.TCP port sharing?
You could fix the client-side port while opening your socket, say 45534. Since one port can be opened by only one process, only one process at a time would be able to open socket connection to the server.
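A sketch of what that looks like (the server host and port are placeholders):
// Requires: using System.Net; using System.Net.Sockets;
// Binding to a fixed local port makes the OS enforce "one connection per machine":
// a second process trying the same Bind gets a SocketException.
var socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
socket.Bind(new IPEndPoint(IPAddress.Any, 45534)); // the fixed client-side port
socket.Connect("server.example.com", 9000);        // placeholder server host/port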
Well, there are many ways to solve this, as expressed in all the answers and comments, but maybe the simplest one is to have a global status store in a place accessible to all users of the current machine (there might be various users logged in on the machine) where you store WHO has the right to have this connection open. Something like a "lock", as it used to be called. That store can be a field in a local or intranet database, a simple file, or whatever. That way you don't need to build or distribute extra binaries.
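A minimal sketch of the file flavour of that "lock" (the path is an assumption):
// Requires: using System.IO;
// Whoever holds the file open exclusively owns the single server connection.
static FileStream TryAcquireConnectionLock()
{
    try
    {
        // FileShare.None makes the open exclusive machine-wide; the path is a placeholder.
        return new FileStream(@"C:\ProgramData\MyApi\connection.lock",
            FileMode.OpenOrCreate, FileAccess.ReadWrite, FileShare.None);
    }
    catch (IOException)
    {
        return null; // another process already owns the right to connect
    }
}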
When a client connects to your server, you create a new thread to handle it (not a process). You can store its IP address in a static dictionary (shared between all threads).
Something like:
static Dictionary<string, TcpClient> clients = new Dictionary<string, TcpClient>();

//This method is executed in a thread
void ProcessRequest(TcpClient client)
{
    //Get the client IP address from the underlying socket (requires using System.Net)
    string ip = ((IPEndPoint)client.Client.RemoteEndPoint).Address.ToString();
    lock (clients)
    {
        if (clients.ContainsKey(ip))
        {
            //TODO: Deny connection
            return;
        }
        clients.Add(ip, client);
    }
    //TODO: Answer the client
}
//TODO: Delete client from list on disconnection
The best solution we've come up with is to create a Windows service that opens a named pipe to manage multiple client processes through one socket connection to the server.
Then our API will be able to detect whether the service is running/installed and fall back to creating its own connection for the client otherwise.
3rd parties can decide if they want to bundle the service with their product or not, but core applications from our system will have it installed.
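For reference, the service end of that named pipe could start out roughly like this (the pipe name is a placeholder):
// Requires: using System.IO.Pipes;
// The Windows service accepts local client processes on a named pipe and
// multiplexes their traffic over its single socket to the server.
var pipe = new NamedPipeServerStream("MyApiBroker", PipeDirection.InOut,
    NamedPipeServerStream.MaxAllowedServerInstances);
pipe.WaitForConnection();
// ...read requests from the pipe, forward them over the shared socket,
// and write the server's responses back to the pipe.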
I will mark this as the answer in a few days if no one has a better option. I was hoping there was a way to execute our assembly as a new process, but all the roads to doing this seem unreliable.
Can someone give me the best way to implement a daily job with .NET technology?
I have an ASP.NET application with a SQL Server database hosted on shared hosting, GoDaddy in my instance.
My application is used to add/change the data in the database, which is performing quite fairly at this time.
I got a new requirement to send some email alerts daily, based on some data criteria that are stored in the database.
Initially I thought to write a Windows service, but GoDaddy does not allow access to the database from anything other than its hosted applications.
Does someone have any idea how to send alerts daily at 1:00 AM?
Thanks in advance
See Easy Background Tasks in ASP.NET by Jeff Atwood.
Copy/paste from the link:
private static CacheItemRemovedCallback OnCacheRemove = null;

protected void Application_Start(object sender, EventArgs e)
{
    AddTask("DoStuff", 60);
}

private void AddTask(string name, int seconds)
{
    OnCacheRemove = new CacheItemRemovedCallback(CacheItemRemoved);
    HttpRuntime.Cache.Insert(name, seconds, null,
        DateTime.Now.AddSeconds(seconds), Cache.NoSlidingExpiration,
        CacheItemPriority.NotRemovable, OnCacheRemove);
}

public void CacheItemRemoved(string k, object v, CacheItemRemovedReason r)
{
    // do stuff here if it matches our taskname, like WebRequest
    // re-add our task so it recurs
    AddTask(k, Convert.ToInt32(v));
}
I haven't used GoDaddy for anything other than domain registration, so I have no experience with what you can or cannot do on their hosting platform. I also don't know what their support or knowledge base is like, but I'd say your best option is to ask GoDaddy what they recommend. Otherwise, you might end up implementing something that's technically feasible but is blocked by the hosting company.
If it's not something that's a prime-time application, one quick and dirty thing to do is to have some kind of external bot calling a (secure) web page on the server that fires off the notification process. Not a real solution, but if this site is just a hobby of yours, it could get you by until you find something the host will allow.
Might also be a good time to find a new host, if this one is not meeting your requirements. There are lots of good ASP.NET hosts available these days.
You can use the Windows Task Scheduler on the web server to schedule a stored procedure call that sends mail based on particular criteria.
osql.exe -S servername -d database -U username -P password -Q "EXEC spAlertOnCriteria"
References:
osql
Task Scheduler
Many hosting providers can request a URL for you every X minutes. I don't know if GoDaddy does, but if so, you could create an ASMX page that kicks off the job and tell them to execute it automatically.
If they don't, one solution might be to fire off the job in a background thread on every page request. If you do that, make sure you put in code that limits it to running every X minutes or more (perhaps using a static variable or a database table, as sketched below) - read this story
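A sketch of that guard with a static variable (SendAlerts is a placeholder for the notification job):
// Requires: using System; using System.Threading;
private static DateTime _lastRun = DateTime.MinValue;
private static readonly object _gate = new object();

void MaybeRunJob()
{
    lock (_gate) // static state is shared across requests, so guard it
    {
        if (DateTime.UtcNow - _lastRun < TimeSpan.FromMinutes(15))
            return;                     // ran recently, skip
        _lastRun = DateTime.UtcNow;
    }
    ThreadPool.QueueUserWorkItem(delegate { SendAlerts(); }); // fire and forget
}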
If you can expose a service on the website hosting the application and database -- authenticated service, of course -- then you can hit that service remotely from any box with credentials, pull down the data, and send the mail that way.
This could be an automated process written as a Windows service, an application that is run under the Scheduler, or some button you push at 1:00 AM. Your pick.
Just because the app is the only thing that can access the database doesn't mean you can't expose the data in other ways.
Use either System.Timers or System.Threading to create an instance that runs at a predetermined time. Have that thread execute whatever task you want... Make sure the code is thread-safe!
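A minimal sketch with System.Threading.Timer (SendAlerts is a placeholder; keep a reference to the timer so it isn't garbage collected):
// Requires: using System; using System.Threading;
static Timer _daily; // keep a reference or the timer gets collected

static void StartDailyJob()
{
    DateTime now = DateTime.Now;
    DateTime firstRun = now.Date.AddHours(1);           // 01:00 today
    if (firstRun <= now)
        firstRun = firstRun.AddDays(1);                 // or tomorrow
    _daily = new Timer(delegate { SendAlerts(); }, null,
        firstRun - now, TimeSpan.FromHours(24));        // then every 24 hours
}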
I have a web application that you can use to import information from another site by giving it a URL. It's been pointed out that you could use this feature to access a private site that is hosted on the same web server.
So...
How can I check that a given url is publicly accessible (whether on the same web server or somewhere different)?
FIX:
I ended up doing this:
protected static bool IsHostWithinSegment(string Host)
{
    Ping pinger = new Ping();
    string data = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa";
    byte[] buffer = Encoding.ASCII.GetBytes(data);
    PingOptions options = new PingOptions();
    // With a TTL of 1, the ping only succeeds if the host is a single hop away,
    // i.e. on the local network segment
    options.Ttl = 1;
    PingReply reply = pinger.Send(Host, 1000, buffer, options);
    return reply.Status == IPStatus.Success;
}
private static Uri BindStringToURI(string value)
{
    Uri uri;
    if (Uri.TryCreate(value, UriKind.Absolute, out uri))
        return uri;
    // Try prepending default scheme
    value = string.Format("{0}://{1}", "http", value);
    if (Uri.TryCreate(value, UriKind.Absolute, out uri))
        return uri;
    return null;
}
The only requirement of mine that it doesn't fulfil is that some installations of our product will exist alongside each other, and you won't be able to import information across them. I suspect this will require using a proxy server to get an external view of things, but as it's not a requirement for my project I'll leave it for someone else.
-- I've just realised that this does entirely solve my problem, since all the publicly accessible URLs resolve to virtual or routable IPs, meaning they hop.
Run a traceroute (a series of pings with short TTLs) to the address; if the firewall(s) is(are) one of the hops, then it's visible from outside the organisation, so it should be acceptable.
System.Net.NetworkInformation has a Ping class that should give you enough information for a tracert-like routine.
This does sound like a big hole though; another approach should probably be considered. Preventing the machine that runs this program from accessing any other machine on the internal network may be better - a kind of internal firewall.
I've added a simple traceroute, since you like the concept:
class Program
{
    static void Main(string[] args)
    {
        PingReply reply = null;
        PingOptions options = new PingOptions();
        options.DontFragment = true;
        Ping p = new Ping();
        for (int n = 1; n < 255 && (reply == null || reply.Status != IPStatus.Success); n++)
        {
            options.Ttl = n;
            reply = p.Send("www.yahoo.com", 1000, new byte[1], options);
            if (reply.Address != null)
                Console.WriteLine(n.ToString() + " : " + reply.Address.ToString());
            else
                Console.WriteLine(n.ToString() + " : <null>");
        }
        Console.WriteLine("Done.");
        System.Console.ReadKey();
    }
}
Should be good enough for a reliable local network.
Only two things spring to mind.
Have a trusted external server verify the visibility of the address (like an HTTP Proxy)
Check the DNS record on the site -- if it resolves to something internal (127.0.0.1, 10.*, 192.168.*, etc.) then reject it -- of course, this might not work depending on how your internal network is set up
Not knowing if this is on a 3rd-party hosting solution or inside your/your company's internal network makes it hard to say which solution would be best; good luck.
EDIT: On second thought, I've canceled the second suggestion as it would still leave you open to DNS rebinding. I'll leave this here for that purpose, but I don't think it's a good idea.
That said, if you have some ability to control the network makeup for this server, then it should probably live in its own world, dedicated, with nothing else on its private network.
Check the URL address, and see if it matches your server address?
edit: or check against a range of addresses...
But all this does not answer the question: could the client access it?
Maybe use some script in the browser to check that the URL is accessible, and inform the server of the result.
But the user could edit the page, or simulate the result...
Have the client read the url contents and send it back to the server, instead of having the server fetch it?
Don't worry about the public accessibility of anyone else's web assets, that question does not have a definite answer in all cases. Just try not to compromise the access policy to your own (or your customer's etc.) web assets.
Use the existing access control mechanisms to control the web application's access. Don't just consult the access control mechanisms in order to duplicate them in the web application. That would be relying on the web application to refrain from using its full access - a false reliance if the web application ever gets compromised or if it simply has a bug in the access control duplication functionality. See http://en.wikipedia.org/wiki/Confused_deputy_problem.
Since the web application acts as a deputy of external visitors, treat it if you can as if it resided outside the internal network. Put it in the DMZ perhaps. Note that I'm not claiming that the solution is one of network configuration, I'm just saying that the solution should be at the same level at which it is solved if the visitor would try to access the page directly.
Make the web application jump through the same hoops the external visitor would have to jump. Let it fail to access resources the external visitors would have failed to access, too. Provide an error page that does not let the external visitor distinguish between "page not found" and "access denied".
The wininet DLL has a function called InternetCheckConnection.
Also look at InternetGetConnectedState.
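A P/Invoke sketch for the first of those (the flag constant comes from wininet.h):
using System.Runtime.InteropServices;

static class WinInet
{
    // BOOL InternetCheckConnection(LPCTSTR lpszUrl, DWORD dwFlags, DWORD dwReserved)
    [DllImport("wininet.dll", SetLastError = true)]
    public static extern bool InternetCheckConnection(string lpszUrl, int dwFlags, int dwReserved);

    public const int FLAG_ICC_FORCE_CONNECTION = 0x1; // from wininet.h

    public static bool CanReach(string url)
    {
        return InternetCheckConnection(url, FLAG_ICC_FORCE_CONNECTION, 0);
    }
}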
You are asking the wrong question. You should be asking, how can I limit access to a given URL so that only people on a certain network can access it?
The fact is that you cannot test in the way that you wanted, because you likely do not have access to other sites on the same web server in order to run a script that attempts to retrieve a URL. It is better to deny all access except the access that you wish to allow.
Perhaps a firewall could do this for you, but if you want more fine-grained control, so that some URLs are wide open and others are restricted, then you probably either need help from the web server software or you need to code this into the application that serves the restricted URLs.
If you are worried that your web application might be used to transfer data that comes from other servers protected by the same firewall which protects you, then you should change the application to disallow any URLs where the domain name portion of the URL resolves to an IP address in the range which is protected by the firewall. You can get that address range information from the firewall administrator.
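A sketch of that check, with 10.0.0.0/8 standing in for whatever range your firewall administrator gives you:
// Requires: using System; using System.Net; using System.Net.Sockets;
static bool ResolvesToProtectedRange(Uri url)
{
    foreach (IPAddress ip in Dns.GetHostAddresses(url.DnsSafeHost))
    {
        // 10.0.0.0/8 is a placeholder; substitute the real protected range.
        if (ip.AddressFamily == AddressFamily.InterNetwork &&
            ip.GetAddressBytes()[0] == 10)
        {
            return true;
        }
    }
    return false;
}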
This is only really a concern on in-house systems because in 3rd party data centers there should not be any private servers that don't have their own protection. In other words, if it is at your company, they may expect their firewall to protect the whole data center and that is reasonable, if a bit risky. But when you rent hosting from a 3rd party with a data center on the Internet, you have to assume that everything inside that data center is equally as potentially hostile as the stuff outside.