I am trying to write an application that will run as a scheduled task on the vCenter server and monitor the current time of each host in my cluster. The time is set by NTP, but I am seeing VMs running on these hosts drift by up to a minute, and I need to monitor what happens.
My hosts are running ESXi v5.1
According to the documentation (http://www.vmware.com/support/developer/vc-sdk/visdk400pubs/ReferenceGuide/vim.host.DateTimeSystem.html#queryDateTime) there is a method that gets the current DateTime of a host - QueryDateTime().
I am struggling to get this to work, though; my sample code is below. It always complains that the QueryDateTime method does not exist!
This is probably me not understanding the SDK, but I can't figure it out.
VimClient vimClient = new VimClient();
vimClient.Connect(vcenter.ServiceURL);
vimClient.Login(vcenter.Username, vcenter.Password);
List<EntityViewBase> hosts = vimClient.FindEntityViews(typeof(VMware.Vim.HostSystem), null, null, null);
foreach (HostSystem host in hosts)
{
Console.WriteLine(host.Config.Network.Vnic[0].Spec.Ip.IpAddress);
HostConfigManager hostConfigManager = host.ConfigManager;
HostDateTimeSystem hostDateTimeSystem = hostConfigManager.DateTimeSystem;
DateTime hostDateTime = hostDateTimeSystem.QueryDateTime();
}
You should try this. ConfigManager.DateTimeSystem is only a ManagedObjectReference, so you have to retrieve the actual view with GetView() before you can call QueryDateTime():
foreach (HostSystem host in vmHosts)
{
HostConfigManager hostConfigManager = host.ConfigManager;
HostDateTimeSystem hostDateTimeSystem = (HostDateTimeSystem)vimClient.GetView(hostConfigManager.DateTimeSystem, null);
DateTime hostDateTime = hostDateTimeSystem.QueryDateTime();
}
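For the drift-monitoring part, a small follow-up sketch that could go inside the loop (my assumption, not part of the original answer: QueryDateTime returns the host clock in UTC, so it is compared against the clock of the machine running the scheduled task):
// Hypothetical drift check; assumes hostDateTime is the host clock in UTC.
TimeSpan drift = hostDateTime.ToUniversalTime() - DateTime.UtcNow;
Console.WriteLine("{0} drift: {1:F1} seconds", host.Name, drift.TotalSeconds);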
I have a basic producer app and a consumer app. If I run both and have both start consuming on their respective topics, I have a great working system. My thought was that if I started the producer and sent a message, I would then be able to start the consumer and have it pick up that message. I was wrong.
Unless both are up and running, I lose messages (or they do not get consumed).
My consumer app looks like this for consuming...
Uri uri = new Uri("http://localhost:9092");
KafkaOptions options = new KafkaOptions(uri);
BrokerRouter brokerRouter = new BrokerRouter(options);
Consumer consumer = new Consumer(new ConsumerOptions(receiveTopic, brokerRouter));
List<OffsetResponse> offset = consumer.GetTopicOffsetAsync(receiveTopic, 100000).Result;
IEnumerable<OffsetPosition> t = from x in offset select new OffsetPosition(x.PartitionId, x.Offsets.Max());
consumer.SetOffsetPosition(t.ToArray());
IEnumerable<KafkaNet.Protocol.Message> msgs = consumer.Consume();
foreach (KafkaNet.Protocol.Message msg in msgs)
{
// do some stuff here based on the message received
}
Unless I include the offset code (the GetTopicOffsetAsync / SetOffsetPosition lines), it starts at the beginning every time I start the application.
What is the proper way to manage topic offsets so messages are consumed after a disconnect happens?
If I run
kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic chat-message-reply-XXX --consumer-property fetch-size=40000000 --from-beginning
I can see the messages, but when I connect my application to that topic, consumer.Consume() does not pick up the messages it has not already seen. I have tried this with and without running the above .bat file to see if that makes any difference. When I look at the consumer.SetOffsetPosition(t.ToArray()) call (t specifically), it shows that the offset is the count of all messages for the topic.
Please help,
Set the auto.offset.reset configuration in your ConsumerOptions to earliest. When the consumer group starts to consume messages, it will consume from the latest offset, because the default value for auto.offset.reset is latest.
But I looked at the kafka-net API now; it does not have an AutoOffsetReset property, and its consumer configuration options seem pretty limited. It also lacks documentation with method summaries.
I would suggest you use the Confluent .NET Kafka NuGet package, because it is maintained by Confluent itself.
Also, why are you calling GetTopicOffsetAsync and setting that offset back on the consumer? I think when you configure your consumer, you should just start reading messages with Consume().
Try this:
static void Main(string[] args)
{
var uri = new Uri("http://localhost:9092");
var kafkaOptions = new KafkaOptions(uri);
var brokerRouter = new BrokerRouter(kafkaOptions);
var consumerOptions = new ConsumerOptions(receivedTopic, brokerRouter);
var consumer = new Consumer(consumerOptions);
foreach (var msg in consumer.Consume())
{
var value = Encoding.UTF8.GetString(msg.Value);
// Process value here
}
}
In addition, enable logs in your KafkaOptions and ConsumerOptions, they will help you a lot:
var kafkaOptions = new KafkaOptions(uri)
{
Log = new ConsoleLog()
};
var consumerOptions = new ConsumerOptions(topic, brokerRouter)
{
Log = new ConsoleLog()
};
I switched over to use Confluent's C# .NET package and it now works.
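For reference, a minimal sketch (my own, not from the original posts) of what that switch can look like with the Confluent.Kafka package; the broker address, topic, and group id are placeholders. The consumer group's committed offsets are what let it resume where it left off after a disconnect, and AutoOffsetReset only applies when the group has no committed offset yet:
using System;
using Confluent.Kafka;

static void Main(string[] args)
{
    var config = new ConsumerConfig
    {
        BootstrapServers = "localhost:9092",
        GroupId = "chat-consumer-group",            // offsets are committed per consumer group
        AutoOffsetReset = AutoOffsetReset.Earliest  // only used when the group has no committed offset yet
    };

    using (var consumer = new ConsumerBuilder<Ignore, string>(config).Build())
    {
        consumer.Subscribe("chat-message-reply-XXX");
        while (true)
        {
            var result = consumer.Consume();        // offsets are auto-committed by default
            Console.WriteLine(result.Message.Value);
        }
    }
}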
I'm currently using TuesPechkin version 2.1.1 in my project, along with TuesPechkin.Wkhtmltox.AnyCPU v0.12.4.1.
This is some of my code:
byte[] result = null;
try
{
var globalSettings = CreateGlobalSettings(portraitMode);
var objectSettings = CreateObjectSettings(websiteUrl, urlParameters);
var document = new HtmlToPdfDocument
{
GlobalSettings = globalSettings
};
document.Objects.Add(objectSettings);
CreateEventLog.CreateInformationLog("Ready to convert PDF");
result = Converter.Convert(document);
CreateEventLog.CreateInformationLog(result == null
? "Conversion failed using the Pechkin library"
: "PDF conversion finished");
I run this code in 3 different environments:
On my local machine it runs fine and it generates the file in 3 seconds.
On one of my servers (let's call it Server A) it runs fine and it generates the file in 3 seconds.
On the other server (let's call it Server B), it hangs for 1 minute (for some reason I don't understand) during the Converter.Convert call, and after that minute it returns null.
Server A and Server B have the same setup (CPU, RAM, etc.).
There is no spike in resource usage on Server B during the conversion.
Any suggestions/ideas?
I found what the issue is.
The URL I'm trying to convert is in a Presentation Layer, which is deployed in a separate server. Pechkin converter is in a Business Layer.
In Server A, I can access the URL from the Business Server.
In Server B, I cannot access the URL from the Business Server.
This is probably some firewall exception that needs to be created.
It would be nice, though, if TuesPechkin returned an error saying it cannot access the URL.
It is important to check how you obtain the converter; a dispose issue may cause this problem.
Just check the code from here:
public static IConverter GetConverter()
{
lock (Locker)
{
if (converter != null)
{
return converter;
}
var tempFolderDeployment = new TempFolderDeployment();
var winAnyCpuEmbeddedDeployment = new WinAnyCPUEmbeddedDeployment(tempFolderDeployment);
IToolset toolSet;
if (HostingEnvironment.IsHosted)
{
toolSet = new RemotingToolset<PdfToolset>(winAnyCpuEmbeddedDeployment);
}
else
{
toolSet = new PdfToolset(winAnyCpuEmbeddedDeployment);
}
converter = new ThreadSafeConverter(toolSet);
}
return converter;
}
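For completeness, a usage sketch under my assumptions (the GetConverter method above, plus a document built the same way as in the question). The point of the pattern is that the one ThreadSafeConverter instance is reused for every conversion instead of being created and disposed per request:
// Reuse the single converter; creating and disposing a converter per request is what tends to cause problems.
byte[] pdf = GetConverter().Convert(document);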
I have a Windows Service wrapping a WCF Service, which contains a WorkflowApplication, which runs Activities. I have also configured SQL Server 2008 Express (I know, it's approaching EOL, but the documentation explicitly states that only SQL Server 2005 or SQL Server 2008 are supported) to host the database, and the connection works. To be even clearer: the Activity runs to completion and returns its result (I'm calling it via the WCF client, wrapped in PowerShell).
The issue I'm having is that I've configured SqlWorkflowInstanceStoreBehavior on the ServiceHost and SqlWorkflowInstanceStore on the WorkflowApplication. Neither of these throws a SQL exception, but I think the ServiceHost is taking precedence, as all I can see is a single entry in the LockOwnersTable.
Code from Windows Service:
this.obj = new ServiceHost(typeof(WorkflowService));
SqlWorkflowInstanceStoreBehavior instanceStoreBehavior = new SqlWorkflowInstanceStoreBehavior("Server=.\\SQL2008EXPRESS;Initial Catalog=WorkflowInstanceStore;Integrated Security=SSPI")
{
HostLockRenewalPeriod = TimeSpan.FromSeconds(5),
InstanceCompletionAction = InstanceCompletionAction.DeleteNothing,
InstanceLockedExceptionAction = InstanceLockedExceptionAction.AggressiveRetry,
InstanceEncodingOption = InstanceEncodingOption.GZip,
RunnableInstancesDetectionPeriod = TimeSpan.FromSeconds(2)
};
this.obj.Description.Behaviors.Add(instanceStoreBehavior);
this.obj.Open();
Code from WCF Service/WorkflowApplication:
SqlWorkflowInstanceStore newSqlWorkflowInstanceStore = new SqlWorkflowInstanceStore("Server=.\\SQL2008EXPRESS;Initial Catalog=WorkflowInstanceStore;Integrated Security=SSPI")
{
EnqueueRunCommands = true,
HostLockRenewalPeriod = TimeSpan.FromSeconds(5),
InstanceCompletionAction = InstanceCompletionAction.DeleteNothing,
InstanceLockedExceptionAction = InstanceLockedExceptionAction.BasicRetry,
RunnableInstancesDetectionPeriod = TimeSpan.FromSeconds(5)
};
InstanceHandle workflowInstanceStoreHandle = newSqlWorkflowInstanceStore.CreateInstanceHandle();
CreateWorkflowOwnerCommand createWorkflowOwnerCommand = new CreateWorkflowOwnerCommand();
InstanceView newInstanceView = newSqlWorkflowInstanceStore.Execute(workflowInstanceStoreHandle, createWorkflowOwnerCommand, TimeSpan.FromSeconds(30));
newSqlWorkflowInstanceStore.DefaultInstanceOwner = newInstanceView.InstanceOwner;
// Now stage the WorkflowApplication, using the SQL instance.
AutoResetEvent syncEvent = new AutoResetEvent(false);
WorkflowApplication newWorkflowApplication = new WorkflowApplication(unwrappedActivity)
{
InstanceStore = newSqlWorkflowInstanceStore
};
Questions:
Does the ServiceHost SqlWorkflowInstanceStoreBehavior override the SqlWorkflowInstanceStore on the WorkflowApplication? If so, the obvious answer would be to remove the SqlWorkflowInstanceStoreBehavior from the ServiceHost; however, as implied before, I fear that will prove fruitless, as the WorkflowApplication currently isn't logging anything (or even attempting to, from what I can tell).
ASAppInstanceService seems specific to Windows Server. Is it possible to host those (for dev/pre-production) on Windows 10, if the ServiceHost (via the Windows Service option) is always going to block/disable the WorkflowApplication from making the SQL calls?
Figured out the answer:
newWorkflowApplication.Persist();
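To add a bit of context (my own note, not from the original answer): with only the InstanceStore set, the WorkflowApplication writes to SQL only when a persistence point is reached. A sketch of the two usual ways to get one:
// Persist automatically whenever the workflow reaches a persistable idle point...
newWorkflowApplication.PersistableIdle = e => PersistableIdleAction.Persist;
newWorkflowApplication.Run();

// ...or force a persistence point explicitly, as in the answer above.
newWorkflowApplication.Persist();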
I need to collect the following two pieces of information from a WebRole running IIS 8 on Azure:
Number of requests queued in IIS
Number of requests currently being processed by worker processes
Since we are on an Azure cloud service, I believe it would be better to stick with the default IIS configuration provided by Azure.
Approach 1: Use WorkerProcess Request Collection
public void EnumerateWorkerProcess()
{
    // ServerManager is IDisposable, so release it when done
    using (ServerManager manager = new ServerManager())
    {
        foreach (WorkerProcess proc in manager.WorkerProcesses)
        {
            // Requests that have been executing for at least 1000 ms
            RequestCollection req = proc.GetRequests(1000);
            Debug.WriteLine(req.Count);
        }
    }
}
Cons:
Requires RequestMonitor to be enabled explicitly in IIS.
Approach 2: Use PerformanceCounter class
public void ReadPerformanceCounter()
{
var root = HostingEnvironment.MapPath("~/App_Data/PerfCount.txt");
PerformanceCounter counter = new PerformanceCounter("ASP.NET", "Requests Current", true);
float val = counter.NextValue();
using (StreamWriter perfWriter = new StreamWriter(root, true))
{
perfWriter.WriteLine(val);
}
}
Cons:
Requires higher privileges than the currently running IIS process.
P.S. There is a four-year-old SO post about this, but it was not answered well.
I have an application that needs to get the last shutdown time. I have used the EventLog class to get the shutdown time. I have a separate class file that is designed to read/write the event log. The ReadPowerOffEvent function is intended to get the power-off event.
public void ReadPowerOffEvent()
{
EventLog eventLog = new EventLog();
eventLog.Log = logName;
eventLog.MachineName = machineName;
if (eventLog.Entries.Count > 0)
{
for (int i = eventLog.Entries.Count - 1; i >= 0; i--)
{
EventLogEntry currentEntry = eventLog.Entries[i];
if (currentEntry.InstanceId == 1074 && currentEntry.Source == "USER32")
{
this.timeGenerated = currentEntry.TimeGenerated;
this.message = currentEntry.Message;
}
}
}
}
But whenever it tries to get the event entry count, it throws an IOException saying "The network path was not found". I tried to resolve it, but failed. Please help me out...
I think you set the wrong log name; this worked for me:
EventLog myLog = new EventLog();
myLog.Log = "System";
myLog.Source = "User32";
EventLogEntry sw = null;
for (var i = myLog.Entries.Count - 1; i >= 0; i--)
{
    if (myLog.Entries[i].InstanceId == 1074)
    {
        sw = myLog.Entries[i]; // most recent shutdown/restart request
        break;
    }
}
You have to have the "Remote Registry" service running on your machine (or the machine you want to run this app on). I suspect that this service is set to manual start on your machine. You may have to change the setting on this service to automatic.
If this app is going to run on other machines, you may want to put some logic into your app to check that this service is running first, as shown in the sketch below. If it isn't, you will need to start it from your app.
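A minimal sketch of that check, assuming a reference to System.ServiceProcess and that starting the service automatically is acceptable in your environment:
using System;
using System.ServiceProcess;

// Assumption: machineName is the same machine name the EventLog is pointed at.
using (var remoteRegistry = new ServiceController("RemoteRegistry", machineName))
{
    if (remoteRegistry.Status != ServiceControllerStatus.Running)
    {
        remoteRegistry.Start(); // requires sufficient privileges on the target machine
        remoteRegistry.WaitForStatus(ServiceControllerStatus.Running, TimeSpan.FromSeconds(30));
    }
}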
Note:
The "Remote Registry" service enables remote users to modify registry setting on your computer. By default, the "Startup type" setting for the "Remote Registry" service may be set to "Automatic" or "Manual" which is a security risk for a single user (or) notebook PC user.
So, to make sure that only users on your computer can modify the system registry disable this "Remote Registry" service.