I have been working on a LightSwitch 2012 (HTML5) project, two of whose key requirements are report generation and trigger-based emails to users.
Since LightSwitch does not support reports, the best way is to use ASP.NET, which is what I used to generate them. But now I am stuck on how to generate trigger-based emails, meaning that each night at 00:00 a query should run against the database and generate emails to users. The options I have are:
1) SQL jobs
2) WCF RIA Services
3) A separate console application
But the client won't allow me anywhere near their servers, so I need a solution with minimal server configuration, and it must be reliable as well.
Any advice in this regard would be appreciated. Thanks!
The server runs Windows Server 2012, and they have their own mail server that I have to use.
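If the console-app route (option 3) is acceptable, the only server-side footprint is one scheduled task. Here is a minimal sketch under illustrative assumptions: the connection string, query, table/column names, and SMTP host are all placeholders, not values from the question.

    // A minimal sketch of option 3: a console app run nightly by Windows Task
    // Scheduler. Connection string, query, and SMTP host are illustrative
    // assumptions only.
    using System.Collections.Generic;
    using System.Data.SqlClient;
    using System.Net.Mail;

    class NightlyMailer
    {
        static void Main()
        {
            const string conn = @"Server=.;Database=AppDb;Integrated Security=true";
            var recipients = new List<string>();

            using (var con = new SqlConnection(conn))
            using (var cmd = new SqlCommand(
                "SELECT Email FROM Users WHERE NeedsNotification = 1", con))
            {
                con.Open();
                using (var reader = cmd.ExecuteReader())
                    while (reader.Read())
                        recipients.Add(reader.GetString(0));
            }

            // Their own mail server; the host name is a placeholder.
            using (var smtp = new SmtpClient("mail.example.local"))
            {
                foreach (var to in recipients)
                    smtp.Send("noreply@example.local", to,
                              "Nightly notification", "Your nightly summary is ready.");
            }
        }
    }

Registering it is a single command, e.g. schtasks /create /tn NightlyMailer /tr "C:\Jobs\NightlyMailer.exe" /sc daily /st 00:00, which keeps the server configuration minimal.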
I have the following architecture: local C# desktop apps, each with SQL Server Express, and a central web app with SQL Server Enterprise, with one database per local app.
I need to sync those databases semi-automatically, but with some control (and status information) from the local desktop and web apps.
For example, I want the user to see when the sync started, when it ended, and whether it completed successfully or with an error.
What is the best solution for this problem?
I see two technologies: merge replication (which works only at the SQL Server level, with no app involvement) and the discontinued Sync Framework, which has lots of problems. Should I create my own app module for this?
Update: just to note, the web app would be fine without this; the reason for the sync is that I need offline functionality, i.e., the desktop app must keep working normally when the network is offline.
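If you do write your own module, the status-reporting part you describe is easy to bolt on regardless of the transport. A minimal sketch, assuming a hypothetical SyncLog table and PushChanges/PullChanges placeholders for whatever sync mechanism you pick:

    // A minimal sketch of an app-level sync wrapper. SyncLog, PushChanges,
    // and PullChanges are illustrative assumptions, not an existing API.
    using System;
    using System.Data.SqlClient;

    public class SyncRunner
    {
        private readonly string _localConn;

        public SyncRunner(string localConn) { _localConn = localConn; }

        // Drives the UI: "started", "completed", "failed: ..."
        public event Action<string> StatusChanged;

        public void Run()
        {
            var started = DateTime.UtcNow;
            StatusChanged?.Invoke($"Sync started at {started:u}");
            try
            {
                PushChanges();   // local -> central, via your chosen transport
                PullChanges();   // central -> local
                Log(started, DateTime.UtcNow, null);
                StatusChanged?.Invoke("Sync completed OK");
            }
            catch (Exception ex)
            {
                Log(started, DateTime.UtcNow, ex.Message);
                StatusChanged?.Invoke($"Sync failed: {ex.Message}");
            }
        }

        private void PushChanges() { /* ... */ }
        private void PullChanges() { /* ... */ }

        private void Log(DateTime started, DateTime ended, string error)
        {
            using (var con = new SqlConnection(_localConn))
            using (var cmd = new SqlCommand(
                "INSERT INTO SyncLog (StartedUtc, EndedUtc, Error) VALUES (@s, @e, @err)", con))
            {
                cmd.Parameters.AddWithValue("@s", started);
                cmd.Parameters.AddWithValue("@e", ended);
                cmd.Parameters.AddWithValue("@err", (object)error ?? DBNull.Value);
                con.Open();
                cmd.ExecuteNonQuery();
            }
        }
    }

The SyncLog table then gives both the desktop and web apps the start/end/error history the user wants to see.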
Let me start by asking for your patience, as I am ignorant of SSRS and not too sharp on the WCF technology that I intend to use to perform this task.
I have a SQL Server Express instance with data I want to get daily reports out of. I cannot schedule jobs on SQL Server Express (I believe the full edition's report server supports scheduling, but Express does not), but I can schedule via Windows Task Scheduler.
So I am writing an application that will use predefined RDLs to generate my reports and send them out via email. I am not sure how an RDL's data connection is set from my C# application; I created the RDLs in their own project, but it looks like I need to convert them to RDLCs for my application.
What I would like to do is query the database via the reports and then send those reports out in an email to a set group of recipients (pulled from the email field in the users/reports result table).
[FYI] I also have a web application that might need to generate these same reports.
Now on to my question:
I am not sure I even need WCF for anything here; judging from this post, Creating a PDF from RDLC report, it looks like I can simply run that code, point it at my data source, and the bytes will be returned to me (fine so far for a quick-and-dirty desktop getter, I think).
Is the same functionality possible in a web application? It would be nice to reuse the same code.
Can someone provide me with some proper guidance for this kind of functionality and the technologies involved, so I can code something proper and maintain the KISS principle?
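For reference, here is a minimal sketch of the local-rendering approach that post describes: LocalReport renders the RDLC to PDF bytes with no report server involved. The report path, dataset name, and mail addresses are illustrative assumptions.

    // Render an RDLC to PDF bytes with the local report engine, then email it.
    // Report path, dataset name, and addresses are illustrative assumptions.
    using System.Data;
    using System.IO;
    using System.Net.Mail;
    using Microsoft.Reporting.WinForms; // use Microsoft.Reporting.WebForms in a web app

    static class ReportMailer
    {
        public static byte[] RenderReport(DataTable data)
        {
            var report = new LocalReport { ReportPath = @"Reports\Daily.rdlc" };
            report.DataSources.Add(new ReportDataSource("DataSet1", data));
            return report.Render("PDF"); // raw PDF bytes, no report server needed
        }

        public static void EmailReport(byte[] pdf, string recipient)
        {
            using (var stream = new MemoryStream(pdf))
            using (var msg = new MailMessage("reports@example.com", recipient))
            {
                msg.Subject = "Daily report";
                msg.Attachments.Add(new Attachment(stream, "Daily.pdf", "application/pdf"));
                new SmtpClient("smtp.example.com").Send(msg);
            }
        }
    }

Note that with an RDLC the data-connection question mostly disappears: you run the query yourself and hand the results to the report, so the same two methods can be shared by the scheduled desktop app and the web application.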
Our web application is developed by two teams. One team works on the client side, with its own development branch, and the other works on the server side, also with its own development branch. The client and the server run separately, each as a website on a different port. The websites are hosted on IIS Express during development, and in production they will run on IIS.
Our ideal situation is that each team can develop completely separately; whenever a development session is over, both teams merge their changesets into a common branch in order to integrate, then each team merges back into its own development branch and continues.
For full separation, we have two server projects: one that handles the real HTTP requests, and a "stub server" that responds to all the client's HTTP requests with default values, just so the client-side team can test their code without being dependent on the functionality of the server.
The problem is that both the stub server and the real server use the same port, which the client-side project is directed to.
This causes many annoying mistakes (mostly for the server-side team) of running the application against the stub server instead of the real one during reviews, tests, etc. Our only workaround is to manually create a virtual directory for the real web server project every time before running, or after finding out we were running the wrong server.
Is there a smarter solution to overcome this annoying problem? That would improve our lives!
If anything I said was foolish or unclear, please correct me (I'm new to this) or ask for more details; I'll be glad to provide them!
Thanks to all helpers!
I believe your problem is more related to build automation than to server configuration. You should really keep the stub server and the real server on separate ports, and change the port your client targets during some kind of build process for your client.
If you are using AngularJS, I suggest creating steps in your client application's build process using common tools like gulp or grunt. You could create build configurations that set a global variable or modify a constant (e.g., the API endpoint), and name them "local testing" (pointing your client at the stub server) and "integration" (pointing at the real server).
Please note that you can easily integrate those build steps into Visual Studio, making them part of your global debug/build process.
Here is a simple gulp task useful for replacing text inside any file: https://www.npmjs.com/package/gulp-replace
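The same idea isn't gulp-specific. As a sketch of the equivalent switch on the .NET side, assuming a hypothetical ApiBaseUrl appSettings key whose value is swapped per build configuration (e.g., via Web.Debug.config/Web.Release.config transforms):

    // A minimal sketch of a config-driven endpoint switch. "ApiBaseUrl" is an
    // illustrative assumption; swap its value per build configuration so the
    // running configuration, not a developer's memory, picks the server.
    // Requires a reference to System.Configuration.
    using System.Configuration;

    public static class ApiEndpoint
    {
        // e.g. <add key="ApiBaseUrl" value="http://localhost:5001/" /> for the stub,
        //      <add key="ApiBaseUrl" value="http://localhost:5000/" /> for the real server
        public static string BaseUrl =>
            ConfigurationManager.AppSettings["ApiBaseUrl"] ?? "http://localhost:5000/";
    }

Either way, the point is the same: which server the client talks to is decided by the build configuration, not by whichever project someone happened to start.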
I'd like to know my options for the following scenario:
I have a C# WinForms application (developed in VS 2010) distributed to a number of offices within the country. The application communicates with a C# web service which lies on a main server at a separate location, and there is one database (SQL Server 2012) at a further location. (All servers run Windows Server 2008.)
Head Office (where we are) utilizes the same front end to manage certain information on the database, which needs to be readily available to all offices in real time. At the same time, any data they change needs to be readily available to us at Head Office, as we have a real-time dashboard web application that monitors site-wide statistics.
Currently, the users are complaining about the speed at which the application operates. They say it is really slow. We work in a business-critical environment where every minute spent waiting may mean losing a client.
I have researched the following options, but I do not come from a DB background, so I am not sure what the best route for my scenario is.
Terminal Services/sessions (which I've just implemented at Head Office, and they say it's a great improvement, although there's a terrible lag, like remoting onto someone's desktop, which is not nice to work on).
Transactional replication (sounds quite plausible for my scenario, but it would require all offices to have their own SQL Server database on their individual servers, and they have a tendency to "fiddle" and break everything they're left in charge of! I wish we could take over all their servers, but they are franchises, so they have their own IT people on site).
I currently have a whole lot of the lookup data cached on start-up of the application, but this too takes 2-3 minutes to complete, which is just not acceptable!
Does anyone have any ideas?
With everything running through the web service, there is no need for additional SQL Servers to be deployed local to the clients: the WS wouldn't be able to communicate with those databases unless the WS were also deployed locally.
Before suggesting any specific improvements, you need to benchmark where your bottlenecks are occurring. What is the latency between the various clients and the web service, and then between the web service and the database? Does the database show any waits? Once you know the worst-case scenario, improve that, and then work your way down.
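As a starting point, a minimal sketch of timing calls from the client side; the helper name and the Trace sink are illustrative assumptions, and the same wrapper can time WS-to-DB calls inside the service:

    // Times any call so client->WS latency can be compared with WS->DB timings.
    using System;
    using System.Diagnostics;

    static class WsBenchmark
    {
        public static T Time<T>(string name, Func<T> call)
        {
            var sw = Stopwatch.StartNew();
            try { return call(); }
            finally
            {
                sw.Stop();
                Trace.WriteLine($"{name}: {sw.ElapsedMilliseconds} ms");
            }
        }
    }

    // Usage, assuming a hypothetical service proxy:
    // var data = WsBenchmark.Time("GetLookupData", () => service.GetLookupData());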
Some general thoughts, though:
Move the WS closer to the database
Cache the data at the web service level to save on DB calls (see the sketch after this list)
Find the expensive WS calls, and try to optimize their throughput
If the lookup data doesn't change all that often, use a local copy of SQL CE to cache that data, and use the MS Sync Framework to keep the data synchronized to the SQL Server
Use SQL CE for everything on the client computer, and use a background process to sync between the client and WS
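Here is a minimal sketch of the web-service-level caching suggestion above, using System.Runtime.Caching; the cache key, 15-minute expiry, and GetLookupDataFromDb are illustrative assumptions:

    // Cache lookup data at the web service so repeated calls skip the DB.
    // Key name, expiry, and the DB helper are illustrative assumptions.
    using System;
    using System.Data;
    using System.Runtime.Caching;

    public static class LookupCache
    {
        private static readonly MemoryCache Cache = MemoryCache.Default;

        public static DataTable GetLookupData()
        {
            var cached = Cache.Get("LookupData") as DataTable;
            if (cached != null)
                return cached;

            DataTable fresh = GetLookupDataFromDb(); // the existing DB call
            Cache.Set("LookupData", fresh,
                new CacheItemPolicy { AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(15) });
            return fresh;
        }

        private static DataTable GetLookupDataFromDb()
        {
            // ... your existing ADO.NET query goes here ...
            return new DataTable();
        }
    }

Pick the expiry to match how stale the lookup data is allowed to be; for rarely changing data it can be much longer than 15 minutes.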
UPDATE
After your comment, two additional thoughts. If your web service payloads are large, you can try adding compression on the web service (if it hasn't already been implemented).
You can also update your client to make the WS calls asynchronously, either on a background thread or, if you are using .NET 4.5, with async/await. This would at least keep the UI usable, but wouldn't necessarily fix any issues with data load times.
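A minimal sketch of the async/await version in the WinForms client (.NET 4.5+); the _service.GetLookupDataAsync() call is an illustrative assumption (Add Service Reference can generate Task-based methods when that option is checked):

    // Inside the form class: the UI thread stays responsive while the WS call
    // is in flight. _service, loadButton, and lookupGrid are assumed members.
    private async void loadButton_Click(object sender, EventArgs e)
    {
        loadButton.Enabled = false;
        try
        {
            var data = await _service.GetLookupDataAsync();
            lookupGrid.DataSource = data;
        }
        finally
        {
            loadButton.Enabled = true; // re-enable even if the call throws
        }
    }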
I'm going to develop a POS system for a medium-scale company, and the requirement is to keep data current across all of their branches. My thinking is that moving the server from local to the web would solve this problem, but I have never set up an online server for a Windows application.
May I know the best option for a secure database? Can SQL Server handle this well? I tried to Google it, but none of the results were what I wanted. May I know what you would do when facing this problem?
My coding knowledge is just VB and C#, plus SQL for the database, though I would like to learn something new if there is a better option. I want the database to be impossible for anonymous users to access and stored securely at the back end only.
What you probably want to do is create a series of services exposed on the internet and accessed by your application. All database access would be mediated by these services. For security you would probably want to build them in WCF and expose them through IIS. Then your Windows application would just call these services for most of its processing.
If you design it properly, you could also have it work with a local database so that it can operate in a disconnected manner if, for example, your servers go down.
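To make the service-mediation idea concrete, a minimal sketch of a WCF service contract; the operation names and the Product type are illustrative assumptions:

    // All database access goes through this contract; the Windows client only
    // ever sees the service, never the database. Names are illustrative.
    using System.Collections.Generic;
    using System.Runtime.Serialization;
    using System.ServiceModel;

    [ServiceContract]
    public interface IPosService
    {
        [OperationContract]
        IList<Product> GetProducts();

        [OperationContract]
        void RecordSale(int productId, int quantity);
    }

    [DataContract]
    public class Product
    {
        [DataMember] public int Id { get; set; }
        [DataMember] public string Name { get; set; }
        [DataMember] public decimal Price { get; set; }
    }

Hosting this in IIS over HTTPS (e.g., basicHttpBinding with transport security) keeps the database unreachable except through the service, which addresses the "no anonymous access" requirement.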
Typically you don't move the server off the site premises.
The problem is that the stores will go completely down whenever your remote server is inaccessible. Causes include internet service interruptions (pretty common) and an overloaded remote server (common enough); basically, anything that stops the traffic between the store location and your remote server will bring them to their knees. The first time this happens, they'll scream. The second time, they'll want your head over the lost sales.
Instead, leave a SQL Server at each location. Set up a master SQL Server somewhere, then set up a VPN connection between the stores and this central office. Finally, have the store SQL boxes do merge replication with the central office. Incidentally, don't use the built-in replication; use an off-the-shelf product that specializes in replicating SQL Server. The built-in one can be difficult to learn.
In the event their internet connection goes dark, the individual stores will still be able to function. The system will also remain performant, as all of the desktop app traffic goes purely to the local SQL box.
Solving replication errors is much easier than dealing with a flaky ISP.
I would recommend checking out the Viravis Platform.
It is an application platform that can also be used simply as an online database for any .NET client via the provided SDK. It has its own generic Windows and web clients and some custom web solutions for specific applications.
You can use it as a complete solution or as a secure online database back end.