How to fix a degraded mdadm array in Ubuntu Linux

I'm currently using Ubuntu as my desktop OS. My OS is installed on the primary HDD, and I have two secondary HDDs configured in a RAID 1 (mirrored) setup for files that I want to make sure I always have a backup of. Just recently, when doing an audit of my machine, I found that one of the HDDs in my RAID 1 had degraded. Since this is a software RAID array, I use 'mdadm' to manage it. The following will show how to quickly re-sync a drive that has degraded but may still be usable.

Before I get into this, it's important to understand the difference between when mdadm reports a drive as 'failed' and when it reports a drive as 'degraded'. A degraded array will be reported as follows:

Command:

$ sudo cat /proc/mdstat

Result:

[Image: /proc/mdstat output showing the degraded RAID 1 array]

Note in the image above that the bad drive is indicated with an underscore '_' instead of an 'F'. An underscore '_' means some data was found to be faulty, and mdadm marked the drive as 'degraded'. An 'F' means the drive has totally failed - time to get a new drive. When data on a drive is found to be faulty, it usually means some sectors on the drive went bad, but the drive isn't necessarily dead yet - it may still be usable for a while longer.
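For reference, a degraded two-disk RAID 1 will look roughly like this in /proc/mdstat (the device names and block counts here are purely illustrative - yours will differ):

md0 : active raid1 sda1[0]
      976630464 blocks super 1.2 [2/1] [U_]

unused devices: <none>

The '[2/1] [U_]' portion is the giveaway: only one of the two members is up, and the underscore marks the degraded slot.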

You'll need to decide on your own whether you're comfortable continuing to use the possibly failing drive in your RAID, or whether it's time to replace it.

In my case, I was comfortable continuing to use the degraded drive. To fix the array, all you need to do is remove the drive and re-add it. In the image above, notice that the degraded drive was 'sdb1', so that's the drive we'll need to remove and re-add.

Command:

# Remove the drive from the md0 array
sudo mdadm --remove /dev/md0 /dev/sdb1

# Check the array and make sure the drive was removed
sudo cat /proc/mdstat

# Add the troublesome drive back to the md0 array
sudo mdadm --add /dev/md0 /dev/sdb1

Result:

[Image: terminal output from removing and re-adding /dev/sdb1]
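If both commands succeed, mdadm's own output is brief - roughly along these lines (your device and array names will match your own setup):

mdadm: hot removed /dev/sdb1 from /dev/md0
mdadm: added /dev/sdb1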

Now you should be all set! Do one last check of your array to make sure things are syncing up like they should:

Command:

sudo cat /proc/mdstat

Result:

[Image: /proc/mdstat output showing the re-added drive resyncing]
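While the re-added drive syncs back up, /proc/mdstat will show a recovery progress line along these lines (the numbers are illustrative):

md0 : active raid1 sdb1[2] sda1[0]
      976630464 blocks super 1.2 [2/1] [U_]
      [==>..................]  recovery = 12.4% (121422656/976630464) finish=81.4min speed=174800K/sec

Once recovery reaches 100%, the status should read '[2/2] [UU]' again and you're back to a healthy mirror.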

Hope this helps!

How to fix Lucee 'Handler "BonCode-Tomcat-CFM-Handler" has a bad module "ManagedPipelineHandler" in its module list' Error.

[Image: IIS error page showing 'Handler "BonCode-Tomcat-CFM-Handler" has a bad module "ManagedPipelineHandler" in its module list']

For whatever reason, IIS likes to set the default version of .NET to 2.0 on some versions of IIS. This is generally ridiculous since 4.0 has been around for some time, and even when 4.0 is installed and working, MS will still default to 2.0.

If you install Lucee server on to your windows server and get this error, there are several possible causes:

1) You need to use a more recent version of .NET for your application pool. The fix is to adjust your .NET application pool version to 4.0 (or above) for that site, then restart the pool. Once you do that, your Lucee install should work perfectly. (A command-line equivalent is shown after this list.)

[Image: IIS Application Pool settings switching from .NET 2.0 to 4.0]

2) You need to ensure that you have .NET Extensibility turned on in your IIS install (there's a command-line option for this after the list as well). In Windows 7, this is what the window looks like:

[Image: Windows Features dialog with .NET Extensibility enabled under IIS]

3) You have a .NET version conflict. You'll need to remove all versions of .NET from your machine and re-install Lucee to let the installer handle installing .NET.
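If you'd rather handle causes 1 and 2 from a command prompt instead of clicking through IIS Manager, something like the following should do it. Run these from an elevated command prompt; the application pool name is just a placeholder (use whatever pool your Lucee site actually runs under), and the DISM feature name can differ on newer Windows versions (e.g. IIS-NetFxExtensibility45):

REM Cause 1: switch the application pool to .NET 4.0 and recycle it
%windir%\system32\inetsrv\appcmd.exe set apppool "YourLuceeAppPool" /managedRuntimeVersion:v4.0
%windir%\system32\inetsrv\appcmd.exe recycle apppool "YourLuceeAppPool"

REM Cause 2: turn on the .NET Extensibility feature for IIS
dism /online /enable-feature /featurename:IIS-NetFxExtensibility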

Hope this helps!

mod_cfml 1.1 is released! Fast, reliable, and new features!

For those of you familiar with the mod_cfml project, you know it consists of two separate components: the web server adapter, which provides information about the web site being served, and the Tomcat valve, which takes that information and automatically processes it within Tomcat - creating a new host, alias, etc. as needed so that Tomcat will match the information coming from the web server. Both the web server adapter and the Tomcat valve have been greatly enhanced in mod_cfml version 1.1.

New features in the Tomcat valve:

  • Speed: the time to create a new host in Tomcat has been greatly reduced, taking less than a second in all our tests - down from several seconds in previous versions of mod_cfml. Jar scanning is disabled by default.
     
  • Speed: the process of "waiting for context files" has been completely removed as it is no longer necessary.
     
  • Speed and memory footprint: only one Tomcat “Host container” is created per Apache/IIS virtualhost/context. All aliases, default site hosts, and IP-based hosts are now added as aliases. The process of creating a new alias is lightning fast.
     
  • Bugfix: thread safety errors have been corrected, and hosts are now created reliably every time.

 

Next, the web server adapter: for Apache 2.4, the adapter has been completely re-written in C! This means that any system can run mod_cfml natively without the need for mod_perl. The mod_perl version of mod_cfml will still be available for Apache 2.2, but will no longer be maintained. With Apache 2.4 and a native C module, mod_cfml can run on any system with extreme speed and only a few lines of config!
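To give a rough idea of what that config looks like under Apache 2.4, here's a minimal sketch. Treat the directive names and module path below as assumptions to verify against the official documentation linked further down:

# Load the native mod_cfml connector (adjust the path to wherever mod_cfml.so lives)
LoadModule modcfml_module modules/mod_cfml.so

# Tell mod_cfml which extensions should be treated as CFML requests
CFMLHandlers ".cfm .cfc .cfml"

# Shared secret - must match the key configured on the Tomcat valve
ModCFML_SharedKey "your-secret-key-here"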

The new mod_cfml.so also includes the following enhancements:

  • Feature: SES URL support is now handled automatically using path_into. Previously, URLs like /some/page.cfm/id/123 would not work out of the box with Tomcat. With mod_cfml 1.1, now they do! This feature is supported in Lucee, OpenBD, and Railo.
     
  • Security: A shared secret key implementation has been added to prevent unauthorized context creation.
     
  • Feature: Virtual directories, or “Aliases” in Apache, are now passed by default from the mod_cfml.so file and handled automatically by Lucee for the current request. Check the documentation for more details on this.

 

Documentation for mod_cfml 1.1 is HERE.

Installation instructions for mod_cfml 1.1 are HERE.

 

Huge "Thank you!" to Paul Klinkenberg and Bilal Soylu for their amazing dedication to this project. You two are awesome!

 

So... what are you waiting for? Install! Upgrade! Stay secure and have fun with CFML!

How to disable TLSv1.0 for PCI Compliance in Apache 2.2

Just recently our PCI compliance scanner started complaining about TLSv1.0 being enabled on our web server, so I had to go figure out how to disable it. The following is what I ended up with in our VirtualHost config which did what I wanted it to:
 
<VirtualHost xx.xx.xx.xx:443>
        ::snip::
        SSLEngine on
        SSLProtocol +TLSv1.1 +TLSv1.2
        SSLCompression off
        SSLHonorCipherOrder On
        SSLCipherSuite ECDHE-RSA-AES128-SHA:ECDHE-RSA-AES256-SHA:ECDHE-RSA-DES-CBC3-SHA:HIGH:!MD5:!aNULL:!EDH
        Header always append X-Frame-Options SAMEORIGIN
        ::snip::
</VirtualHost>

[Image: PCI compliance scan result flagging TLSv1.0]

I simply removed the "all" option I had there previously and manually enabled only TLSv1.1 and TLSv1.2. I also added the "Header" line so that common browsers won't allow our site to be displayed inside frames on other sites.
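After restarting Apache, you can sanity-check the change from any machine with openssl installed (replace the hostname with your own site):

# This handshake should now be refused, since TLSv1.0 is disabled
openssl s_client -connect www.example.com:443 -tls1

# This one should still complete
openssl s_client -connect www.example.com:443 -tls1_2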

Now, checking our site using the SSL Server Test, we get an A rating. The downgrade from A+ is due to an older SHA1 hash upstream in the certificate chain that we have no way to change and that doesn't directly affect the security of our site, so as far as I'm concerned we look good.

The drawback is that older clients, such as those running Windows XP or versions of Android older than 4.2.2, will no longer be able to connect to us. Sorry about that. Since this is required for PCI compliance, there's not much we can do.

No webcam image from USB webcam on Ubuntu 14.04

I have been doing Google Hangouts more for work, and most folks who use Hangouts also have a webcam set up so you can see the person you're speaking with. This is useful because, as most communicators know, so much communication is non-verbal. It's helpful to have an image of the person you're talking to.

[Image: USB webcam]

I have just recently changed out my PC case and motherboard for an older server board. The server board, while slower in clock speed and RAM speed, has a lot more CPU cores and a lot more RAM. On my previous PC, running an older version of Ubuntu (13.04), my USB webcam worked flawlessly. However, on this "newer" PC, my webcam no longer works. First, my USB camera didn't even register; I had to unplug it and plug it back in for it to register at all. That is, nothing was showing up when I entered the following:

ls /dev/video*

As soon as I unplugged and re-plugged the USB video camera, a device would show up here, but when I started up "cheese" (a webcam viewer for Linux), I got nothing but a black screen. What was going on?
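If you hit the same thing, a couple of quick checks can at least confirm whether the kernel sees the camera at all (the exact output will vary by camera and port):

# Is the camera visible on the USB bus?
lsusb

# Did the kernel log anything when the camera was plugged in?
dmesg | tail -n 20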

Looking up some articles on debugging Ubuntu camera issues, I happened on an article that suggested I try a camera app called "guvcview". I did, and when I ran it, I got the following error message over and over:

Could not grab image (select timeout): Resource temporarily unavailable

Mkay... why can't you grab an image if you see that I have a camera now?

Turns out, the bottom of the article that I found earlier had the answer:

"Some newer webcams produce high resolution images at 30fps. This is a lot of data. If your USB cable path is too long or convoluted, frames can drop causing flicker or no picture. If you find MJPG format mode produces less flicker than YUYV format mode, then this could be the case. Try plugging the webcam directly into the computer. Then try progressively reconnecting your extension cables or hubs to see how the picture changes. Check you don't have a low speed hub.

You may also see:

Could not grab image (select timeout): Resource temporarily unavailable

This could indicate the frame data not arriving down the long cable fast enough. Check also for cable kinks and tight loops, which like ethernet, cause delayed packets."

BINGO! Apparently my older server board only has the earliest generation of USB ports, which means they are really, really slow. I tweaked the settings on the "Video" tab in guvcview and was able to get a picture of some kind by dramatically lowering the resolution and changing the camera output format. Unfortunately, the camera is still barely usable. Couple that with the relatively low GHz of the CPU and the slow RAM, and I'm not so sure this old server board is worth using as a desktop.
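If you want to see what formats, resolutions, and frame rates your camera claims to support before fiddling with guvcview, the v4l2-ctl tool from the v4l-utils package is handy (this assumes the camera registered as /dev/video0):

# Install the v4l2 command-line tools
sudo apt-get install v4l-utils

# List every pixel format, resolution, and frame rate the camera advertises
v4l2-ctl --device=/dev/video0 --list-formats-ext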