Blogs

Digital Brand Management for KIDCO

[vc_row][vc_column][vc_column_text] YISolutions, a Managed IT Services & Cyber Security solutions company based in Karachi, Pakistan, is proud to announce that KIDCO (Baby Care) has selected YISolutions as its Digital Brand Management & Cloud Solution provider. This is the first time KIDCO (Baby Care) has selected our Digital Brand Management & Cloud Solution services.

Services provided:
- Cloud Server
- Cloud Mail Channel
- Management of eMail Server
- Management of Server
- Digital Brand Management
- Brand Awareness
- Building Brand Trust
[/vc_column_text][/vc_column][/vc_row][vc_row][vc_column][vc_images_carousel images=”5457,5458,5459,5460,5461,5462″ img_size=”large”][/vc_column][/vc_row][vc_row][vc_column][vc_column_text] KIDCO was incorporated in Pakistan in 1996 with a vision of becoming the finest baby care product manufacturer in Pakistan. KIDCO produces an entire product range on high-end German and Japanese machinery, and the Kidco brand has been known for supplying high-quality baby care products to Pakistani mothers for over two decades.

YISolutions is a key player in IT consultancy, cyber security, and managed IT services. YISolutions was established in 2002-2003; our Pakistan registered office is located in Defence, Karachi, and our principal registered office is located in Herndon, in the US. For more information, please email us at support@yi.com.pk. [/vc_column_text][/vc_column][/vc_row]
Read more

ADVANTAGES / DISADVANTAGES OF OpenVZ-KVM-Cloud

Feature                                                                OpenVZ  KVM  Cloud
Disk Resize (KVM and Cloud can only increase in size)                  Yes     No   No
Rebootless Upgrades (kernel, disk, or memory changes need no reboot)   Yes     No   No
Lowest Overhead (the shared kernel on OpenVZ has lower overhead)       Yes     No   No
Supports All OSes (OpenVZ only supports Linux operating systems)       No      Yes  Yes
True VM Isolation (only KVM and cloud virtualization provide it)       No      Yes  Yes
Disk Cache (disk cache supported)                                      Yes     Yes  Yes
Swap Space (swap space supported)                                      No      Yes  Yes

OpenVZ
OpenVZ is a Linux-based virtualization platform built on the Linux kernel. It allows a physical server to run multiple isolated operating system instances known as containers. OpenVZ can only run Linux-based operating systems such as CentOS, Fedora, Gentoo, and Debian. One disadvantage of OpenVZ is that users are not able to make kernel modifications: all virtual servers have to get along with the kernel version the host runs on. However, because it doesn't have the overhead of a true hypervisor, it is very fast and efficient compared to KVM, Xen, VMware, and cloud platforms.

KVM, Xen, VMware
The next three platforms can be grouped into the same category because they work almost identically. The differences they do have will not be noticeable to the virtual server or the end user. All three platforms provide true virtualization: resources are not shared with the host kernel or other virtual servers. Almost any operating system can run on these three platforms. We chose the KVM platform because it is supported by the CentOS operating system that we use as the host OS.

Cloud
Cloud is the new term companies large and small are kicking around. There is no true definition of what a cloud is or how it is supposed to be designed, and in our opinion the term "cloud" as it applies to VPS hosting is no different from a VPS that has failover, redundancy, or backup. So coining the new term "cloud", and the extra hype, is completely unnecessary.
A typical cloud setup runs on the KVM, Xen, or VMware platform. The difference is in the type of hardware that is used: instead of having storage located on the host server, all data is stored on a much larger SAN/NAS array with multiple disks. RAID is used to prevent data loss from disk failure in the physical array, and in the best-case scenario a second array is added in case the entire array fails. The host server accesses the storage via Ethernet. In the event of a host server failure, spare host servers are on standby to start up when needed; downtime during the failure would be a reboot of your operating system. At YISolutions, our cloud setup runs on the KVM platform with two storage arrays. The advantage of cloud is a truly redundant environment: all aspects of hardware failure are covered. If you are looking for a zero-downtime solution, this is truly it. Cloud hosting costs much more than a standard VPS server because of all the additional hardware required.
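The feature comparison at the top of this post can be encoded as a small lookup table and queried programmatically. A quick sketch; the dictionary keys and the helper function are illustrative, not part of any real tool:

```python
# The OpenVZ/KVM/Cloud comparison table, encoded as a lookup so the
# trade-offs can be queried in code. Values mirror the table above.
FEATURES = {
    "disk_resize":         {"OpenVZ": True,  "KVM": False, "Cloud": False},
    "rebootless_upgrades": {"OpenVZ": True,  "KVM": False, "Cloud": False},
    "lowest_overhead":     {"OpenVZ": True,  "KVM": False, "Cloud": False},
    "all_os_support":      {"OpenVZ": False, "KVM": True,  "Cloud": True},
    "true_vm_isolation":   {"OpenVZ": False, "KVM": True,  "Cloud": True},
    "disk_cache":          {"OpenVZ": True,  "KVM": True,  "Cloud": True},
    "swap_space":          {"OpenVZ": False, "KVM": True,  "Cloud": True},
}

def platforms_with(feature):
    """Return the platforms that support a given feature, sorted by name."""
    return sorted(p for p, ok in FEATURES[feature].items() if ok)
```

For example, `platforms_with("true_vm_isolation")` returns only KVM and Cloud, matching the point made above that OpenVZ's shared kernel trades isolation for lower overhead.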
Read more

What’s the difference between cloud and virtualization?

Overview
It's easy to confuse virtualization and cloud, particularly because they both revolve around creating useful environments from abstract resources. However, virtualization is a technology that allows you to create multiple simulated environments or dedicated resources from a single, physical hardware system, while clouds are IT environments that abstract, pool, and share scalable resources across a network. To put it simply, virtualization is a technology, whereas cloud is an environment. Clouds are usually created to enable cloud computing, which is the act of running workloads within that system.

Cloud infrastructure can include a variety of bare-metal, virtualization, or container software that can be used to abstract, pool, and share scalable resources across a network to create a cloud. At the base of cloud computing is a stable operating system (like Linux®). This is the layer that gives users independence across public, private, and hybrid environments. If you have intranet access, internet access, or both already established, virtualization can then be used to create clouds, though it's not the only option.

With virtualization, software called a hypervisor sits on top of physical hardware and abstracts the machine's resources, which are then made available to virtual environments called virtual machines. These resources can be raw processing power, storage, or cloud-based applications containing all the runtime code and resources required to deploy them. If the process stops here, it's not cloud; it's just virtualization.

Virtual resources need to be allocated into centralized pools before they're called clouds. Adding a layer of management software gives administrative control over the infrastructure, platforms, applications, and data that will be used in the cloud. An automation layer is added to replace or reduce human interaction with repeatable instructions and processes, which provides the self-service component of the cloud.
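The distinction drawn above (a hypervisor splits one host into VMs; a cloud pools many hosts behind a self-service layer) can be sketched as a toy model. All class and method names here are illustrative, not any real virtualization API:

```python
class Host:
    """A physical machine with a fixed number of CPUs."""
    def __init__(self, cpus):
        self.cpus = cpus

class Hypervisor:
    """Virtualization: split one host into isolated VMs."""
    def __init__(self, host):
        self.host = host
        self.allocated = 0

    def create_vm(self, cpus):
        if self.allocated + cpus > self.host.cpus:
            raise RuntimeError("host exhausted")
        self.allocated += cpus
        return {"cpus": cpus}

class Cloud:
    """Cloud: pool many hypervisors behind one on-demand interface."""
    def __init__(self, hypervisors):
        self.pool = hypervisors

    def provision(self, cpus):
        # Self-service allocation: try each host in the pool until one fits.
        for hv in self.pool:
            try:
                return hv.create_vm(cpus)
            except RuntimeError:
                continue
        raise RuntimeError("pool exhausted")
```

The point of the sketch: a `Hypervisor` alone is "just virtualization"; the `Cloud` layer adds pooling and automatic placement, which is the part the article says turns virtual resources into a cloud.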
You've created a cloud if you've set up an IT system that:
- Can be accessed by other computers through a network.
- Contains a repository of IT resources.
- Can be provisioned and scaled quickly.

Clouds deliver the added benefits of self-service access, automated infrastructure scaling, and dynamic resource pools, which most clearly distinguish them from traditional virtualization. Virtualization has its own benefits, such as server consolidation and improved hardware utilization, which reduces the need for power, space, and cooling in a datacenter. Virtual machines are also isolated environments, so they are a good option for testing new applications or setting up a production environment.

A practical comparison
Virtualization can make 1 resource act like many, while cloud computing lets different departments (through a private cloud) or companies (through a public cloud) access a single pool of automatically provisioned resources.

Virtualization
Virtualization is technology that allows you to create multiple simulated environments or dedicated resources from a single, physical hardware system. Software called a hypervisor connects directly to that hardware and allows you to split 1 system into separate, distinct, and secure environments known as virtual machines (VMs). These VMs rely on the hypervisor's ability to separate the machine's resources from the hardware and distribute them appropriately.

Cloud computing
Cloud computing is a set of principles and approaches to deliver compute, network, and storage infrastructure resources, services, platforms, and applications to users on demand across any network. These infrastructure resources, services, and applications are sourced from clouds, which are pools of virtual resources orchestrated by management and automation software so they can be accessed by users on demand through self-service portals supported by automatic scaling and dynamic resource allocation.
Virtualization vs. cloud:
- Definition: technology vs. methodology
- Purpose: create multiple simulated environments from 1 physical hardware system vs. pool and automate virtual resources for on-demand use
- Use: deliver packaged resources to specific users for a specific purpose vs. deliver variable resources to groups of users for a variety of purposes
- Configuration: image-based vs. template-based
- Lifespan: years (long-term) vs. hours to months (short-term)
- Cost: high capital expenditures (CAPEX), low operating expenses (OPEX) vs. private cloud: high CAPEX, low OPEX; public cloud: low CAPEX, high OPEX
- Scalability: scale up vs. scale out
- Workload: stateful vs. stateless
- Tenancy: single tenant vs. multiple tenants

How do I move from virtualization to cloud computing?
If you already have a virtual infrastructure, you can create a cloud by pooling virtual resources together, orchestrating them with management and automation software, and creating a self-service portal for users; or you can let something like Red Hat® OpenStack® Platform do a lot of that work for you. But moving from virtualization to cloud computing isn't that simple when you're bound to a vendor's enterprise-license agreement, which might limit your ability to invest in modern technologies like clouds, containers, and automation systems.
Read more

How to Install & Configure SSL Certificates on SAP Web Dispatcher

Step 1: Unzip the certificate files onto the server where you will install the certificate. The ZIP file you downloaded contains the following certificates:
- SSL certificate (i.e. ssl_certificate.crt)
- Intermediate CA certificate (i.e. IntermediateCA.crt)
- Root CA certificate (i.e. Root.crt)
Copy the Root CA and Intermediate CA certificate files onto the server where you will install the certificate.

Step 2: Install the SSL certificate. To install an SSL certificate on a SAP Web Dispatcher, follow either of the following methods.

Method 1: Install the SSL certificate using the Trust Manager
1. If the certificate request dialog is still open, close it.
2. If the SAP Web Dispatcher's PSE is not loaded in the PSE maintenance section, load it by double-clicking the File node and selecting the PSE from the file system.
3. In the PSE maintenance section, choose Import Cert. Response. The dialog for the certificate response appears.
4. Insert the contents of the certificate request response into the dialog's text box, either using copy-and-paste or by loading the file from the file system. The signed public-key certificate (i.e. ssl_certificate.crt, as described in Step 1) is imported into the SAP Web Dispatcher's PSE, which is displayed in the PSE maintenance section. You can view the certificate by double-clicking it; the certificate information is then shown in the certificate maintenance section.
5. Create a PIN for the PSE. NOTE: It is recommended to use a PIN to protect the PSE, especially if the SAP Web Dispatcher is located in your demilitarized zone.
6. Save the data in the Trust Manager. You are prompted for the location to which to save the PSE. Replace the PSE that you created earlier.
7. If you saved the PSE to a local file on the application server, copy it to the SECUDIR directory on the SAP Web Dispatcher.

Method 2:
Install the SSL certificate using SAPGENPSE. Use the configuration tool sapgenpse to import the certificate request response into the PSEs. Run the following command:

    sapgenpse import_own_cert <Additional_options> -p <PSE_file> -c <Certificate_file.crt> -r <CA_certificate.crt> -x <PIN>

-p <PSE_file>: Path and file name of the PSE. The path is the SECUDIR directory, and the file name is SAPSSLS.pse for the SSL server PSE or SAPSSLC.pse for the SSL client PSE (if it exists). Enclose the path in quotation marks if it contains spaces.
-c <Cert_file>: Path and file name of the certificate request response. Enclose the path in quotation marks if it contains spaces.
-r <RootCA_cert_file>: File containing both the Root CA certificate and the Intermediate CA certificate. The Intermediate CA certificate must come first, followed by the Root CA certificate. Enclose the path in quotation marks if it contains spaces. For example, open Notepad and paste the Intermediate CA certificate (i.e. IntermediateCA.crt, as described in Step 1) and the Root CA certificate (i.e. Root.crt, as described in Step 1) in the following order:

    -----BEGIN CERTIFICATE-----
    [Intermediate 1]
    -----END CERTIFICATE-----
    -----BEGIN CERTIFICATE-----
    [Intermediate 2]
    -----END CERTIFICATE-----
    -----BEGIN CERTIFICATE-----
    [Root CA]
    -----END CERTIFICATE-----

-x <PIN>: PIN that protects the PSE (character string).

Alternatively, build the combined file on the command line:

    cat intermediate1.crt intermediate2.crt root.crt > ssl-bundle.crt
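The bundling step above (intermediate certificates first, then the root CA) can also be scripted. A minimal Python sketch equivalent to the cat command, using the example file names from this article; the function name is illustrative:

```python
def build_bundle(cert_paths, bundle_path):
    """Concatenate PEM certificate files, in the given order, into one bundle."""
    with open(bundle_path, "w") as out:
        for path in cert_paths:
            with open(path) as f:
                # Strip trailing whitespace so each PEM block starts on its own line.
                out.write(f.read().strip() + "\n")

# Order matters: intermediates first, then the root CA.
# build_bundle(["intermediate1.crt", "intermediate2.crt", "root.crt"],
#              "ssl-bundle.crt")
```

If the order is reversed, many TLS clients will fail to build the chain, which is why the article stresses putting the Root CA certificate last.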
Read more

Quick things to check when you experience high memory levels in ASP.NET

This article describes the quick things to check when you experience high memory in Microsoft ASP.NET.

Original product version: ASP.NET
Original KB number: 893660

This article starts with some common issues, actions to remedy them, and a brief explanation of why these situations can cause problems.

ASP.NET Support Voice column
In the April 2005 Support Voice column, we inadvertently provided a link to the wrong file. Instead of linking to a download for the Web service, we linked to the XML file returned by the Web service. That link has been corrected. If you'd like to review the article with the correct file attached, see Dynamic page updates using XMLHTTP.

What's considered high memory
Obviously, this depends on the volume and activity of specific applications. In general, high memory is when your ASP.NET worker process (Aspnet_wp.exe) or Internet Information Services (IIS) worker process (W3wp.exe) memory is consistently increasing and isn't returning to a comfortable level. In general terms, a comfortable level would be under 600 MB in the default 2-GB user memory address space. Once the memory level is higher than that comfortable level, the process has less headroom than it should, and this behavior may affect other applications running on the system. The key is to understand that some applications require more memory than others. If you're exceeding these limits, you may add more memory or add another server to your Web farm (or consider a Web farm). Profiling is also recommended in these cases; it can enable developers to create leaner applications. In this article, we're looking at a situation where you consistently see memory rise until the server stops performing.

Application set up for debugging
One reason for high memory that we see in Support a lot is when you have debugging, tracing, or both enabled for your application. Enabling debugging and tracing is a necessity when you develop your application.
By default, when you create your application in Visual Studio .NET, you'll see the following attribute set in your Web.config file:

    <compilation ... debug="true" />

or

    <trace enabled="true" ... />

Also, when you do a final build of your application, make sure that you do it in Release mode, not Debug mode. Once you're in production, debugging should no longer be necessary; it can really slow down your performance and eat up your memory. Setting this attribute changes a few things about how your application is handled. First, batch compilation is disabled, even if it's set in the compilation element. This means an assembly is created for every page in your application so that you can break into it. These assemblies can be scattered randomly across your memory space, making it more difficult to find the contiguous space needed to allocate memory. Second, the executionTimeout attribute (<httpRuntime> element) is set to a high number, overriding the default of 90 seconds. That's fine when debugging, because you can't have the application time out while you patiently step through the code to find your blunders. However, it's a significant risk in production: if you have a rogue request for whatever reason, it will hold on to a thread and continue any detrimental behavior for days rather than a few minutes. Finally, you'll be creating more files in your Temporary ASP.NET Files folder, and the System.Diagnostics.DebuggableAttribute gets added to all generated code, which can cause performance degradation. If you get nothing else from this article, I hope you get this: leaving debugging enabled is bad. We see this behavior all too often, and it's so easy to change. Remember that the setting can also be applied at the page level, so make sure none of your pages set it.
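A quick way to audit the settings discussed above is to parse Web.config and report whether debugging or tracing is still enabled. A hedged sketch using only the Python standard library; the element and attribute names follow the snippets shown, but treat this as an illustration rather than a complete Web.config parser:

```python
import xml.etree.ElementTree as ET

def debug_flags(web_config_xml):
    """Return (debug_enabled, trace_enabled) from Web.config text."""
    root = ET.fromstring(web_config_xml)
    compilation = root.find(".//compilation")
    trace = root.find(".//trace")
    # Both attributes default to "false" when absent.
    debug_on = compilation is not None and \
        compilation.get("debug", "false").lower() == "true"
    trace_on = trace is not None and \
        trace.get("enabled", "false").lower() == "true"
    return debug_on, trace_on
```

Running this across the Web.config files on a server is a fast way to catch the "left debugging on in production" mistake the article warns about.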
String concatenation
There are applications that build HTML output by using server-side code, building one large HTML string to send to the browser. That's fine, but if you're building the string by using + and & concatenation, you may not be aware of how many large strings you're creating. For example:

    string mystring = "<html>";
    mystring = mystring + "<table><tr><td>";
    mystring = mystring + "First Cell";
    mystring = mystring + "</td></tr></table>";
    mystring = mystring + "</html>";

This code seems harmless enough, but here's what you're storing in memory:

    <html>
    <html><table><tr><td>
    <html><table><tr><td>First Cell
    <html><table><tr><td>First Cell</td></tr></table>
    <html><table><tr><td>First Cell</td></tr></table></html>

You may think that you're storing just the last line, but you're storing all of these strings. You can see how this could get out of hand, especially when you're building a large table, perhaps by looping through a large recordset. If that's what you're doing, use the System.Text.StringBuilder class, so that you store only the one large string. See Use Visual C# to improve string concatenation performance.

.NET Framework Service Pack 1 (SP1)
If you aren't running the .NET Framework SP1 yet, install it if you're experiencing memory issues. I won't go into great detail, but basically, with SP1 memory is allocated in a much more efficient manner: 16 MB at a time for large objects rather than 64 MB at a time. We've all moved house, and we know you can pack a lot more into a car or truck using many small boxes rather than a few large boxes. That's the idea here.

Don't be afraid to recycle periodically
We recycle application pools in IIS every 29 hours by default. The Aspnet_wp.exe process will keep going until you end the task, restart IIS, or restart the computer, which means this process could be running for months.
It's a good idea for some applications to just restart the worker process every couple of days or so, at a convenient time.

Questions to ask
The previous items were all things that you can fix quickly. However, if you're experiencing memory issues, ask yourself these questions:
- Am I using many large objects? Objects of 85,000 bytes or more are stored in the large object heap.
- Am I storing objects in Session state? These objects are going to stay in memory for much longer than if you use and dispose of them.
- Am I using
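The intermediate-string problem from the string concatenation section above can be demonstrated in a few lines. Python's io.StringIO and str.join play the role StringBuilder plays in .NET; the HTML fragments are the ones from the article's example:

```python
import io

parts = ["<html>", "<table><tr><td>", "First Cell",
         "</td></tr></table>", "</html>"]

# Naive: each concatenation builds a brand-new string holding
# everything accumulated so far, just like the C# example.
naive = ""
for p in parts:
    naive = naive + p

# Builder-style: pieces are buffered and materialized once at the end.
builder = io.StringIO()
for p in parts:
    builder.write(p)
built = builder.getvalue()

# Both produce the same final string; only the allocation pattern differs.
assert naive == built == "".join(parts)
```

With five short fragments the difference is invisible, but in a loop over a large recordset the naive version allocates a progressively larger throwaway string on every iteration, which is exactly the memory pressure the article describes.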
Read more

WordPress Toolkit and Quarantine. How does it work?

How does it work?
We've often encountered a situation where scanning the server for WordPress sites made the WordPress Toolkit completely unresponsive. After some digging, we found that, most of the time, a malware infection on one or more WordPress sites on the server causes this problem: it prevents WordPress Toolkit from properly accessing certain important files, so the Toolkit was doomed to wait for those files forever while not responding to any commands. To address this issue, we added a reasonable timeout for certain WordPress Toolkit operations. Suspicious WordPress websites that WordPress Toolkit finds now go into quarantine mode:
1. An email notification is sent. The notification text and recipients can be configured under Tools & Settings > Notifications > "WordPress installation is quarantined" for admin/reseller/customer.
2. WordPress Toolkit marks the website as "Quarantined" in the WordPress Toolkit interface.
3. WordPress Toolkit excludes the website from all automatic tasks, such as updates.

What if it is not malware?
I receive notifications, but my websites are not infected with malware. What can be done? This might be caused by performance issues on one or several websites. For example, a plugin might continuously run a cron task that causes the timeout, which in turn causes the quarantine. Try increasing the values of the following options in the panel.ini file:

    [ext-wp-toolkit]
    wpCliTimeoutRegular = 180
    wpCliTimeoutMaintenanceTimeout = 180
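The fix described above amounts to wrapping slow operations in a timeout and treating an overrun as a quarantine signal. A minimal illustration of the same idea in Python; the function name and return values are illustrative, and the 180-second default simply mirrors the panel.ini values shown:

```python
from concurrent.futures import ThreadPoolExecutor
from concurrent.futures import TimeoutError as FutureTimeout

def run_with_timeout(fn, timeout_seconds=180):
    """Run fn; return ('ok', result), or ('quarantined', None) if it overruns."""
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(fn)
        try:
            return ("ok", future.result(timeout=timeout_seconds))
        except FutureTimeout:
            # The operation is still hanging; flag it instead of waiting forever.
            return ("quarantined", None)
```

This is the general pattern: rather than blocking indefinitely on a site whose files cannot be read, the scan gives up after a fixed budget and marks the site for human attention.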
Read more


What is KVM?

Overview
Kernel-based Virtual Machine (KVM) is an open source virtualization technology built into Linux®. Specifically, KVM lets you turn Linux into a hypervisor that allows a host machine to run multiple, isolated virtual environments called guests or virtual machines (VMs). KVM is part of Linux: if you've got Linux 2.6.20 or newer, you've got KVM. KVM was first announced in 2006 and merged into the mainline Linux kernel a year later. Because KVM is part of existing Linux code, it immediately benefits from every new Linux feature, fix, and advancement without additional engineering.

How does KVM work?
KVM converts Linux into a type-1 (bare-metal) hypervisor. All hypervisors need some operating system-level components, such as a memory manager, process scheduler, input/output (I/O) stack, device drivers, security manager, and a network stack, to run VMs. KVM has all these components because it's part of the Linux kernel. Every VM is implemented as a regular Linux process, scheduled by the standard Linux scheduler, with dedicated virtual hardware like a network card, graphics adapter, CPU(s), memory, and disks.

Implementing KVM
Long story short, you have to run a version of Linux that was released after 2007, installed on x86 hardware that supports virtualization capabilities. If both of those boxes are checked, then all you have to do is load 2 existing modules (a host kernel module and a processor-specific module), an emulator, and any drivers that will help you run additional systems. But implementing KVM on a supported Linux distribution, like Red Hat Enterprise Linux, expands KVM's capabilities, letting you swap resources among guests, share common libraries, optimize system performance, and a lot more. Building a virtual infrastructure on a platform you're contractually tied to may limit your access to the source code. That means your IT developments are probably going to be more workarounds than innovations, and the next contract could keep you from investing in clouds, containers, and automation. Migrating to a KVM-based virtualization platform means being able to inspect, modify, and enhance the source code behind your hypervisor. And there's no enterprise-license agreement because there's no source code to protect. It's yours.

KVM features
KVM is part of Linux. Linux is part of KVM. Everything Linux has, KVM has too. But there are specific features that make KVM an enterprise's preferred hypervisor.

Security
KVM uses a combination of Security-Enhanced Linux (SELinux) and secure virtualization (sVirt) for enhanced VM security and isolation. SELinux establishes security boundaries around VMs. sVirt extends SELinux's capabilities, allowing Mandatory Access Control (MAC) security to be applied to guest VMs and preventing manual labeling errors.

Storage
KVM is able to use any storage supported by Linux, including some local disks and network-attached storage (NAS). Multipath I/O may be used to improve storage and provide redundancy. KVM also supports shared file systems, so VM images may be shared by multiple hosts. Disk images support thin provisioning, allocating storage on demand rather than all up front.

Hardware support
KVM can use a wide variety of certified Linux-supported hardware platforms. Because hardware vendors regularly contribute to kernel development, the latest hardware features are often rapidly adopted in the Linux kernel.

Memory management
KVM inherits the memory management features of Linux, including non-uniform memory access and kernel same-page merging. The memory of a VM can be swapped, backed by large volumes for better performance, and shared or backed by a disk file.

Live migration
KVM supports live migration, the ability to move a running VM between physical hosts with no service interruption.
The VM remains powered on, network connections remain active, and applications continue to run while the VM is relocated. KVM also saves a VM's current state so it can be stored and resumed later.

Performance and scalability
KVM inherits the performance of Linux, scaling to match demand load as the number of guest machines and requests increases. KVM allows the most demanding application workloads to be virtualized and is the basis for many enterprise virtualization setups, such as datacenters and private clouds.

Scheduling and resource control
In the KVM model, a VM is a Linux process, scheduled and managed by the kernel. The Linux scheduler allows fine-grained control of the resources allocated to a Linux process and guarantees a quality of service for a particular process. In KVM, this includes the completely fair scheduler, control groups, network namespaces, and real-time extensions.

Lower latency and higher prioritization
The Linux kernel features real-time extensions that allow VM-based apps to run at lower latency with better prioritization (compared to bare metal). The kernel also divides processes that require long computing times into smaller components, which are then scheduled and processed accordingly.

Managing KVM
It's possible to manually manage a handful of VMs fired up on a single workstation without a management tool. Large enterprises use virtualization management software that interfaces with virtual environments and the underlying physical hardware to simplify resource administration, enhance data analyses, and streamline operations. Red Hat created Red Hat Virtualization for exactly this purpose.

KVM and Red Hat
We believe in KVM so much that it's the sole hypervisor for all of our virtualization products, and we're continually improving the kernel code with contributions to the KVM community. But since KVM is part of Linux, it's already included in Red Hat Enterprise Linux, so why would you want Red Hat Virtualization?
Well, Red Hat has two versions of KVM. The KVM that ships with Red Hat Enterprise Linux has all of the hypervisor functionality with basic management capabilities, allowing customers to run unlimited isolated virtual machines on a single host. Red Hat Virtualization contains an advanced version of KVM that enables enterprise management of unlimited guest machines. It’s ideal for use in datacenter virtualization, technical workstations, private clouds, and in development or production.
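As a concrete sketch of the storage and live-migration features described above, here is how they look with libvirt’s standard command-line tools. These are environment-specific administrative commands, not a runnable script: the image path, the VM name “webvm”, and the target host “host2.example.com” are all hypothetical, and live migration assumes shared storage between the hosts.

```shell
# Create a thin-provisioned qcow2 disk image: the file starts small
# and grows on demand, up to the 40 GiB virtual size.
qemu-img create -f qcow2 /var/lib/libvirt/images/webvm.qcow2 40G

# Inspect the image: compare "disk size" (actual) to "virtual size" (allocated).
qemu-img info /var/lib/libvirt/images/webvm.qcow2

# Live-migrate the running VM "webvm" to another KVM host over SSH.
# The guest stays powered on and network connections remain active;
# with shared storage (e.g. NFS), only memory state moves.
virsh migrate --live webvm qemu+ssh://host2.example.com/system

# Save the VM's current state to a file and resume it later.
virsh save webvm /var/lib/libvirt/save/webvm.state
virsh restore /var/lib/libvirt/save/webvm.state
```

The same operations are exposed through the libvirt API, which is what management layers such as Red Hat Virtualization build on.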

Setting Up IIS Application Pool (Windows)

An IIS application pool contains the web applications installed on your sites. If your service provider has allocated a dedicated IIS application pool for your sites, you get a level of isolation between the web applications used by your sites and those used by other hosting customers on the same server. Because each application pool runs independently, errors in one application pool do not affect applications running in other pools. Once you switch on the application pool, all web applications on your websites will use it.

To switch on a dedicated IIS application pool for your websites:

1. Go to Websites & Domains > Dedicated IIS Application Pool for Website.
2. Click Switch On.
3. Specify the maximum number of worker processes permitted to service requests for the IIS application pool, and the amount of time (in minutes) a worker process may remain idle before it shuts down.
4. To limit the amount of CPU resources the application pool can use, clear the Unlimited checkbox and enter a value (in percent) in the Maximum CPU use (%) field, select the action IIS performs when a worker process exceeds the configured maximum CPU usage, and specify the reset period for monitoring CPU usage on the application pool. When the specified number of minutes passes since the last reset, IIS resets the CPU timers for the logging and limit intervals.
5. Select the required recycling options, based on time or resource consumption, to periodically recycle the IIS application pool and avoid unstable states that can lead to application crashes, hangs, or memory leaks.
6. Click OK.

To stop all applications running in the application pool:

1. Go to Websites & Domains > Dedicated IIS Application Pool for Website.
2. Click Stop.

To start all applications in the application pool:

1. Go to Websites & Domains > Dedicated IIS Application Pool for Website.
2. Click Start.

By default, the IIS application pool runs in 64-bit mode.
To run certain old versions of applications, you may need to enable 32-bit mode.

To enable IIS to run applications in 32-bit mode:

1. Go to Websites & Domains > Dedicated IIS Application Pool for Website.
2. Select the “Enable 32-bit applications” checkbox, then click OK.

If you run applications that are known to have memory leaks or become unstable after working for a long time, you might need to restart them from time to time.

To restart all applications running in the application pool:

1. Go to Websites & Domains > Dedicated IIS Application Pool for Website.
2. Click Recycle.

To switch off the dedicated IIS application pool for your websites:

1. Go to Websites & Domains > Dedicated IIS Application Pool for Website.
2. Click Switch Off.
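For administrators with command-line access to the server itself, the same application pool settings the panel exposes can be adjusted directly with IIS’s appcmd utility. This is a sketch only: the pool name “MyPool” is hypothetical, and these commands must be run on the Windows server hosting IIS, not through the hosting panel.

```shell
REM Enable 32-bit mode for older applications (pool name "MyPool" is hypothetical).
%windir%\system32\inetsrv\appcmd.exe set apppool "MyPool" /enable32BitAppOnWin64:true

REM Limit CPU usage to 50% and throttle the worker process when exceeded.
REM Note: cpu.limit is expressed in 1/1000ths of a percent, so 50% = 50000.
%windir%\system32\inetsrv\appcmd.exe set apppool "MyPool" /cpu.limit:50000 /cpu.action:Throttle

REM Shut down worker processes after 10 minutes of idle time.
%windir%\system32\inetsrv\appcmd.exe set apppool "MyPool" /processModel.idleTimeout:00:10:00

REM Recycle the pool daily at 03:00 to avoid memory leaks and hangs.
%windir%\system32\inetsrv\appcmd.exe set apppool "MyPool" /+recycling.periodicRestart.schedule.[value='03:00:00']

REM Recycle all applications in the pool immediately (equivalent to the panel's Recycle button).
%windir%\system32\inetsrv\appcmd.exe recycle apppool /apppool.name:"MyPool"
```

On shared hosting you typically will not have this access, which is why the panel workflow above exists; the commands are shown to clarify what the panel options map to underneath.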

Twitter web client outage forces users to log out, blocks logins

Twitter is experiencing a worldwide outage affecting its web platform that logs users out and prevents them from accessing tweets. The outage began at around noon EST and only affects the web/desktop version of Twitter, not the mobile platform. While attempting to use Twitter, the site may redirect you to https://twitter.com/logout/error and display an error message stating that “Something went wrong, but don’t fret – it’s not your fault. Let’s try again.” As you can see from the image below, this page indicates that you are currently logged in to Twitter. However, as you use the site, Twitter will sometimes indicate that you are not logged in and prompt you to do so. At 1:42 PM EST, Twitter’s support account tweeted that they have resolved the errors and users can log into the site again. However, the Twitter status page still shows no outages.

Update 9/28/21 1:23 PM EST: Added tweet from Twitter’s support account.
Update 9/28/21 1:45 PM EST: Replaced tweet with resolution message.

NOTE: This article is copyrighted by bleepingcomputer.com and is used for educational and informational purposes only.