I have a Windows Service wrapping a WCF Service, which contains a WorkflowApplication, which runs Activities. I have also configured SQL Server 2008 Express (I know, it's approaching EOL, but the documentation explicitly states that only SQL Server 2005 or SQL Server 2008 are supported) to host the database, and the connection works. To be even clearer: the Activity runs to completion and returns its result (I'm calling it via the WCF client wrapped in PowerShell).
The issue I'm having is that I've configured SqlWorkflowInstanceStoreBehavior on the ServiceHost and SqlWorkflowInstanceStore on the WorkflowApplication. Neither of these throws a SQL exception, but I think the ServiceHost is taking precedence, as all I can see is a single entry in the LockOwnersTable.
Code from Windows Service:
this.obj = new ServiceHost(typeof(WorkflowService));

SqlWorkflowInstanceStoreBehavior instanceStoreBehavior = new SqlWorkflowInstanceStoreBehavior("Server=.\\SQL2008EXPRESS;Initial Catalog=WorkflowInstanceStore;Integrated Security=SSPI")
{
    HostLockRenewalPeriod = TimeSpan.FromSeconds(5),
    InstanceCompletionAction = InstanceCompletionAction.DeleteNothing,
    InstanceLockedExceptionAction = InstanceLockedExceptionAction.AggressiveRetry,
    InstanceEncodingOption = InstanceEncodingOption.GZip,
    RunnableInstancesDetectionPeriod = TimeSpan.FromSeconds(2)
};

this.obj.Description.Behaviors.Add(instanceStoreBehavior);
this.obj.Open();
Code from WCF Service/WorkflowApplication:
SqlWorkflowInstanceStore newSqlWorkflowInstanceStore = new SqlWorkflowInstanceStore("Server=.\\SQL2008EXPRESS;Initial Catalog=WorkflowInstanceStore;Integrated Security=SSPI")
{
    EnqueueRunCommands = true,
    HostLockRenewalPeriod = TimeSpan.FromSeconds(5),
    InstanceCompletionAction = InstanceCompletionAction.DeleteNothing,
    InstanceLockedExceptionAction = InstanceLockedExceptionAction.BasicRetry,
    RunnableInstancesDetectionPeriod = TimeSpan.FromSeconds(5)
};

InstanceHandle workflowInstanceStoreHandle = newSqlWorkflowInstanceStore.CreateInstanceHandle();
CreateWorkflowOwnerCommand createWorkflowOwnerCommand = new CreateWorkflowOwnerCommand();
InstanceView newInstanceView = newSqlWorkflowInstanceStore.Execute(workflowInstanceStoreHandle, createWorkflowOwnerCommand, TimeSpan.FromSeconds(30));
newSqlWorkflowInstanceStore.DefaultInstanceOwner = newInstanceView.InstanceOwner;

// Now stage the WorkflowApplication, using the SQL instance.
AutoResetEvent syncEvent = new AutoResetEvent(false);

WorkflowApplication newWorkflowApplication = new WorkflowApplication(unwrappedActivity)
{
    InstanceStore = newSqlWorkflowInstanceStore
};
Questions:
Does the ServiceHost's SqlWorkflowInstanceStoreBehavior override the SqlWorkflowInstanceStore on the WorkflowApplication? If so, the obvious answer would be to remove the SqlWorkflowInstanceStoreBehavior on the ServiceHost; however, as implied above, I fear that will prove fruitless, as the WorkflowApplication currently isn't writing anything to the store (or even attempting to, from what I can tell).
ASAppInstanceService seems specific to Windows Server. Is it possible to host that (for dev/pre-production) on Windows 10, if the ServiceHost (via the Windows Service option) is always going to block/disable the WorkflowApplication from making the SQL calls?
Figured out the answer:
newWorkflowApplication.Persist();
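For context, here is roughly where that call fits into the WorkflowApplication setup shown above. This is only a sketch; the PersistableIdle handler and the Run/WaitOne wiring are assumptions added for illustration, not part of the original code:

WorkflowApplication newWorkflowApplication = new WorkflowApplication(unwrappedActivity)
{
    InstanceStore = newSqlWorkflowInstanceStore,
    // Assumption: persist and unload whenever the workflow goes idle, so the
    // instance is written to the SQL store even before it completes.
    PersistableIdle = e => PersistableIdleAction.Unload,
    Completed = e => syncEvent.Set()
};

// Explicitly persist the instance state to the SQL store, then run it.
newWorkflowApplication.Persist();
newWorkflowApplication.Run();
syncEvent.WaitOne();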
I have a basic producer app and a consumer app. If I run both and have both start consuming on their respective topics, I have a great working system. My thought was that if I started the producer and sent a message, I would then be able to start the consumer and have it pick up that message. I was wrong.
Unless both are up and running, I lose messages (or they do not get consumed).
My consumer app looks like this for consuming:
Uri uri = new Uri("http://localhost:9092");
KafkaOptions options = new KafkaOptions(uri);
BrokerRouter brokerRouter = new BrokerRouter(options);
Consumer consumer = new Consumer(new ConsumerOptions(receiveTopic, brokerRouter));

List<OffsetResponse> offset = consumer.GetTopicOffsetAsync(receiveTopic, 100000).Result;
IEnumerable<OffsetPosition> t = from x in offset select new OffsetPosition(x.PartitionId, x.Offsets.Max());
consumer.SetOffsetPosition(t.ToArray());

IEnumerable<KafkaNet.Protocol.Message> msgs = consumer.Consume();
foreach (KafkaNet.Protocol.Message msg in msgs)
{
    // do some stuff here based on the message received
}
Unless I include the offset code above (GetTopicOffsetAsync / SetOffsetPosition), it starts at the beginning every time I start the application.
What is the proper way to manage topic offsets so messages are consumed after a disconnect happens?
If I run
kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic chat-message-reply-XXX --consumer-property fetch-size=40000000 --from-beginning
I can see the messages, but when I connect my application to that topic, consumer.Consume() does not pick up the messages it has not already seen. I have tried this with and without running the above bat file to see if that makes any difference. When I look at the consumer.SetOffsetPosition(t.ToArray()) call (t specifically), it shows that the offset is the count of all messages for the topic.
Please help,
Set the auto.offset.reset configuration in your ConsumerOptions to earliest. When the consumer group starts to consume messages, it will consume from the latest offset, because the default value for auto.offset.reset is latest.
But I looked at the kafka-net API just now, and it does not have an AutoOffsetReset property; its consumer configuration options seem pretty limited. It also lacks documentation with method summaries.
I would suggest you use the Confluent .NET Kafka NuGet package, because it is maintained by Confluent itself.
Also, why are you calling GetTopicOffsetAsync and setting that offset back on the consumer? I think when you configure your consumer, you should just start reading messages with Consume().
Try this:
static void Main(string[] args)
{
    var uri = new Uri("http://localhost:9092");
    var kafkaOptions = new KafkaOptions(uri);
    var brokerRouter = new BrokerRouter(kafkaOptions);
    var consumerOptions = new ConsumerOptions(receivedTopic, brokerRouter);
    var consumer = new Consumer(consumerOptions);

    foreach (var msg in consumer.Consume())
    {
        var value = Encoding.UTF8.GetString(msg.Value);
        // Process value here
    }
}
In addition, enable logging in your KafkaOptions and ConsumerOptions; it will help you a lot:
var kafkaOptions = new KafkaOptions(uri)
{
    Log = new ConsoleLog()
};

var consumerOptions = new ConsumerOptions(topic, brokerRouter)
{
    Log = new ConsoleLog()
};
I switched over to use Confluent's C# .NET package and it now works.
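For reference, a rough sketch of what that can look like with the Confluent.Kafka client. The group id is a placeholder, and the topic name is taken from the question; AutoOffsetReset only applies when the group has no committed offsets, after which the committed offsets are what let messages survive a disconnect:

using System;
using Confluent.Kafka;

class Program
{
    static void Main()
    {
        var config = new ConsumerConfig
        {
            BootstrapServers = "localhost:9092",
            GroupId = "chat-consumer",                  // placeholder; offsets are committed per group
            AutoOffsetReset = AutoOffsetReset.Earliest, // used only when the group has no committed offset
            EnableAutoCommit = true                     // commit consumed offsets so a restart resumes where it left off
        };

        using (var consumer = new ConsumerBuilder<Ignore, string>(config).Build())
        {
            consumer.Subscribe("chat-message-reply-XXX"); // topic from the question

            while (true)
            {
                var result = consumer.Consume(TimeSpan.FromSeconds(1));
                if (result == null) continue; // no message within the poll interval
                Console.WriteLine(result.Message.Value);
            }
        }
    }
}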
I need to collect the following two pieces of information from a WebRole running IIS 8 on Azure:
Number of requests queued in IIS
Number of requests currently being processed by the worker process
Since we are on an Azure cloud service, I believe it would be better to stick with the default IIS configuration provided by Azure.
Approach 1: Use WorkerProcess Request Collection
public void EnumerateWorkerProcess()
{
    ServerManager manager = new ServerManager();
    foreach (WorkerProcess proc in manager.WorkerProcesses)
    {
        RequestCollection req = proc.GetRequests(1000);
        Debug.WriteLine(req.Count);
    }
}
Cons:
Requires RequestMonitor to be enabled explicitly in IIS.
Approach 2: Use PerformanceCounter class
public void ReadPerformanceCounter()
{
    var root = HostingEnvironment.MapPath("~/App_Data/PerfCount.txt");
    PerformanceCounter counter = new PerformanceCounter("ASP.NET", "requests current", true);
    float val = counter.NextValue();
    using (StreamWriter perfWriter = new StreamWriter(root, true))
    {
        perfWriter.WriteLine(val);
    }
}
Cons:
Requires higher privileges than the IIS process currently runs with.
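For reference, a sketch of approach 2 extended to read both figures listed at the top. The counter names ("Requests Current" and "Requests Queued" under the "ASP.NET" category) are the standard ASP.NET counters and are an assumption here; note that "Requests Queued" reflects the ASP.NET request queue, which may or may not be exactly what "queued in IIS" means for your scenario, and this has not been verified on an Azure role instance:

public void ReadAspNetRequestCounters()
{
    // Assumption: the standard aggregate ASP.NET counters are available and the
    // process has permission to read performance counter data.
    using (var current = new PerformanceCounter("ASP.NET", "Requests Current", true))
    using (var queued = new PerformanceCounter("ASP.NET", "Requests Queued", true))
    {
        Debug.WriteLine("Requests Current: " + current.NextValue());
        Debug.WriteLine("Requests Queued: " + queued.NextValue());
    }
}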
P.S. There is a four-year-old SO post about this, but it was not answered well.
I have a Windows service that works with MQ.
But I want it to work over an SSL channel, with a key database holding the public/private keys for that.
Can you explain how to do it?
P.S. I'm not very experienced with MQ.
Right now I connect to MQ using this code:
MQEnvironment.Hostname = ConfigurationManager.AppSettings["HostnameIN"];
MQEnvironment.Channel = ConfigurationManager.AppSettings["ChannelIN"];
MQEnvironment.Port = int.Parse(ConfigurationManager.AppSettings["PortIN"]);
Environment.SetEnvironmentVariable("MQCCSID", ConfigurationManager.AppSettings["MQCCSID"]);
var mqQueueManagerName = ConfigurationManager.AppSettings["QueueManagerNameIN"];
var mqQueueName = ConfigurationManager.AppSettings["QueueNameIN"];
const int openOptions = MQC.MQOO_BROWSE | MQC.MQOO_INPUT_AS_Q_DEF;
var qMgr = new MQQueueManager(mqQueueManagerName);
var getOptions = new MQGetMessageOptions();
and get all messages using this
using (var mqQueue = qMgr.AccessQueue(mqQueueName, openOptions))
{
    try
    {
        //while (mqQueue.CurrentDepth > 0)
        while (true)
        {
            var message = new MQMessage();
            //message.Version = 2;
            getOptions.Options = MQC.MQGMO_WAIT | MQC.MQGMO_BROWSE_NEXT;
            mqQueue.Get(message, getOptions);
            mqMessages.Add(message);
        }
    }
    catch (MQException ex)
    {
        // Assumption: the original snippet was truncated here; the browse loop is
        // expected to end when there are no more messages to read (MQRC_NO_MSG_AVAILABLE).
        if (ex.Reason != MQC.MQRC_NO_MSG_AVAILABLE)
            throw;
    }
}
In order to set up MQ to use SSL on the channel you're using, you don't need to make any application changes at all - you simply need to configure the channel you're using on the queue manager to require SSL. The MQ client libraries and the queue manager will handle establishing that secure connection for you. So in theory all you need to do is make the MQSC/MQ Explorer changes which will configure SSL on the channel.
I recommend you read the following page in the IBM Knowledge Center. It provides a number of scenarios for various methods of connecting a client securely to the queue manager:
http://www-01.ibm.com/support/knowledgecenter/SSFKSJ_8.0.0/com.ibm.mq.sce.doc/q014220_.htm
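Depending on how the channel and client connection are defined, the .NET client side may also need to be told which CipherSpec and key repository to use. A hedged sketch using the MQEnvironment properties from the MQ classes for .NET; the CipherSpec and path below are placeholders and must match the server-connection channel definition:

// Placeholders: the CipherSpec must match SSLCIPH on the server-connection
// channel, and the key repository is the file-name stem of the client's
// key database (.kdb/.sth) holding the public/private keys.
MQEnvironment.SSLCipherSpec = "TLS_RSA_WITH_AES_128_CBC_SHA256";
MQEnvironment.SSLKeyRepository = @"C:\ProgramData\IBM\MQ\ssl\key";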
I am trying to write an application that will run as a scheduled task on the vCenter server and will monitor the current time of each host in my cluster. The time is set by NTP, but I am seeing VM servers running on these hosts drifting by up to a minute, and I need to monitor what happens.
My hosts are running ESXi v5.1
According to the documentation (http://www.vmware.com/support/developer/vc-sdk/visdk400pubs/ReferenceGuide/vim.host.DateTimeSystem.html#queryDateTime) there is a method that gets the current DateTime of a host - QueryDateTime().
I am struggling to get this to work, though... my sample code is below. It always complains that the QueryDateTime method does not exist!
This is likely me not understanding the SDK, but I can't figure it out...
VimClient vimClient = new VimClient();
vimClient.Connect(vcenter.ServiceURL);
vimClient.Login(vcenter.Username, vcenter.Password);

List<EntityViewBase> hosts = vimClient.FindEntityViews(typeof(VMware.Vim.HostSystem), null, null, null);
foreach (HostSystem host in hosts)
{
    Console.WriteLine(host.Config.Network.Vnic[0].Spec.Ip.IpAddress);
    HostConfigManager hostConfigManager = host.ConfigManager;
    HostDateTimeSystem hostDateTimeSystem = hostConfigManager.DateTimeSystem;
    DateTime hostDateTime = hostDateTimeSystem.QueryDateTime();
}
You should try this.
foreach (HostSystem host in vmHosts)
{
    HostConfigManager hostConfigManager = host.ConfigManager;
    // DateTimeSystem is only a ManagedObjectReference; GetView materializes it
    // into a HostDateTimeSystem view whose methods can actually be invoked.
    HostDateTimeSystem hostDateTimeSystem = (HostDateTimeSystem)vimClient.GetView(hostConfigManager.DateTimeSystem, null);
    DateTime hostDateTime = hostDateTimeSystem.QueryDateTime();
}
I'm developing a WCF service running on IIS, and I need to count the messages in each private queue in MSMQ. The fastest way seems to be the PowerShell method.
The benchmark is here:
http://www.codeproject.com/Articles/346575/Message-Queue-Counting-Comparisions
When debugging in Visual Studio 2012 it works great, but when deployed on my local IIS 7.5 server it returns 0.
Here is the method I'm using:
private int GetPowerShellCount()
{
    return GetPowerShellCount(".\\private$\\pingpong", Environment.MachineName, "", "");
}

private int GetPowerShellCount(string queuePath, string machine, string username, string password)
{
    var path = string.Format(@"\\{0}\root\CIMv2", machine);
    ManagementScope scope;
    if (string.IsNullOrEmpty(username))
    {
        scope = new ManagementScope(path);
    }
    else
    {
        var options = new ConnectionOptions { Username = username, Password = password };
        scope = new ManagementScope(path, options);
    }
    scope.Connect();

    if (queuePath.StartsWith(".\\")) queuePath = queuePath.Replace(".\\", string.Format("{0}\\", machine));

    string queryString = "SELECT * FROM Win32_PerfFormattedData_msmq_MSMQQueue";
    var query = new ObjectQuery(queryString);
    var searcher = new ManagementObjectSearcher(scope, query);

    IEnumerable<int> messageCountEnumerable =
        from ManagementObject queue in searcher.Get()
        select (int)(UInt64)queue.GetPropertyValue("MessagesInQueue");

    //IEnumerable<string> messageCountEnumerable =
    //    from ManagementObject queue in searcher.Get()
    //    select (string)queue.GetPropertyValue("Name");

    var x = messageCountEnumerable.First();
    return x;
}
Please note that I'm not using the user/pass parameters, so it's all local (WCF service and MSMQ on the same machine).
Why is it returning 0 when deployed to IIS?
What do you think I should try out?
Your problem might be related to IIS-specific behavior.
Have a look at Tom Hollander's blog: MSMQ, WCF and IIS: Getting them to play nice (Part 1)
In general, message queues can be called whatever you want. However, when you are hosting your MSMQ-enabled service in IIS 7 WAS, the queue name must match the URI of your service's .svc file. In this example we'll be hosting the service in an application called MsmqService with an .svc file called MsmqService.svc, so the queue must be called MsmqService/MsmqService.svc.
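Separately, if the WMI query keeps returning 0 under the IIS application pool identity, it may be worth comparing against a count taken directly through System.Messaging. A sketch reusing the .\private$\pingpong path from the question; enumeration is slower than WMI, but it only needs peek access to the queue, which can make permission problems easier to spot:

private int GetCountByEnumeration(string queuePath)
{
    // Peek access is enough to walk the queue; no receive permission is needed.
    using (var queue = new MessageQueue(queuePath, QueueAccessMode.Peek))
    {
        int count = 0;
        var enumerator = queue.GetMessageEnumerator2();
        while (enumerator.MoveNext())
        {
            count++;
        }
        return count;
    }
}

For example, GetCountByEnumeration(@".\private$\pingpong") should return the same figure the WMI query is expected to produce.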