Tagged: hardware

  • BrainwreckedTech 3:06 am on June 5, 2010 Permalink | Reply
    Tags: boot splash, graphics cards, hardware, plymouth

    BUG: Plymouth (Ubuntu Boot Splash) Reverts To 640×480 With Proprietary NVIDIA Driver 


    UPDATE: Try sudo dpkg-reconfigure grub-pc first. With that, I no longer needed to make the tweaks in this article.

    Gah! It was so beautiful after the initial setup! 1680×1050 of native-res boot screen. Then I installed the proprietary NVIDIA drivers and it fell apart. Not only did the resolution fall down to 640×480, but the text input box dipped below the screen (I use encryption) and the dots seemed to cut into one another. I don’t know who deserves a swift kick in the [pick body part here], but I lean towards NVIDIA as I could not install in a VESA text mode AND things worked just fine with the open-source nv module.

    Andrew’s WebUpd8 blog did not help. Neither did Comment #34 for Bug #526892 on Launchpad. Of all places, it was this post on Softpedia that brought things back.
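    For the curious, that class of fix works by forcing GRUB2’s framebuffer mode so Plymouth inherits a sane resolution. A minimal sketch, assuming a 1680×1050 panel and a stock Ubuntu GRUB2 setup (whether this matches the Softpedia post line-for-line is an assumption):

        # /etc/default/grub -- force a native-res framebuffer
        # and tell GRUB to hand it to the kernel unchanged
        GRUB_GFXMODE=1680x1050
        GRUB_GFXPAYLOAD_LINUX=keep

    Then regenerate the config with sudo update-grub and reboot.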

    (More …)

     
  • BrainwreckedTech 3:15 pm on April 30, 2010 Permalink | Reply
    Tags: blkid, fdisk, fstab, hardware, mkfs, uuid

    Adding A New Hard Drive To A Live Linux System 

    This is a little guide I came up with while adding a new drive to my file server.

    The first ordeal is to find out where the new hard drive ended up. I like to eliminate as much guesswork as possible. In that regard, fdisk is a great tool because it’ll let you know which devices have invalid partition tables.
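    The full walkthrough is behind the cut, but as a sketch, the whole job boils down to find, partition, format, and mount. The device name, filesystem, and mount point below are assumptions — adjust to taste:

        # find the new drive -- a factory-fresh disk shows up
        # in the listing with an invalid/empty partition table
        sudo fdisk -l

        # partition it (interactive), then format the new partition
        sudo fdisk /dev/sdb
        sudo mkfs.ext3 /dev/sdb1

        # grab the UUID for a stable /etc/fstab entry
        sudo blkid /dev/sdb1

        # example fstab line (UUID is a placeholder):
        # UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /mnt/newdrive  ext3  defaults  0  2
        sudo mkdir -p /mnt/newdrive && sudo mount /mnt/newdrive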

    (More …)

     
  • BrainwreckedTech 1:04 am on March 21, 2010 Permalink | Reply
    Tags: hardware, motherboards, ram

    INFO: Converting Timings To Multipliers When Over-Clocking 

    OK, this isn’t going to be a complete guide to over-clocking or even a beginner’s guide. This is just going to be some information on things that bust my balls when it comes to pushing your hardware beyond the specs.

    You see, BIOS manufacturers try to be helpful by giving you options like setting the HT Link to a specified frequency like 800MHz or 2.0GHz. And they pull the same crap with DRAM, giving you options for 333MHz, 533MHz, and so on. What they are actually doing is setting multipliers and dividers, and these values become invalid when over-clocking the base clock.
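    A quick worked example of the multiplier math, using the AMD-typical 200MHz base clock (numbers are illustrative):

        HT Link shown as 2.0GHz  =  200MHz base clock x 10 multiplier
        Raise the base clock to 250MHz and that same x10 setting now
        means 2.5GHz -- well past spec. Drop the multiplier to x8
        (250MHz x 8 = 2.0GHz) and you're back within spec.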

    So here’s my “cheat sheet” when it comes to over-clocking with the AMD platform:

    (More …)

     
  • BrainwreckedTech 3:24 am on May 20, 2009 Permalink | Reply
    Tags: gigabit ethernet, hardware

    HOWTO: Optimize Gigabit Networking in Linux 

    Even if you have a gigabit networking adapter and a gigabit switch capable of jumbo frames, Linux still uses the default MTU size of 1500. To get something better, you need to configure things by hand.

    The reason for this is that the IETF has never standardized anything above 1500. You might very well have gigabit ethernet equipment that doesn’t support jumbo frames at all, or you may be very disappointed to find out that “jumbo frame” can describe any packet size between 1500 and 9000 bytes.

    To make matters worse, not every gigabit ethernet switch handles mixed networking the same way. You would think a gigabit switch would guarantee a 1gb connection between two computers with 1gb networking adapters, but under various circumstances this isn’t always the case. Ideally, you would separate your 100mb and 1gb devices onto two different switches, but even this isn’t guaranteed to work.

    Now that we have all the caveats out of the way, read on if you want to start optimizing.
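    The specifics are behind the cut, but the central knob is the interface MTU. A minimal sketch, assuming an eth0 interface and hardware that actually supports 9000-byte frames:

        # set a jumbo MTU for the current session
        sudo ifconfig eth0 mtu 9000

        # the same thing with the newer iproute2 tools
        sudo ip link set dev eth0 mtu 9000

        # to make it persistent on Debian/Ubuntu, add "mtu 9000"
        # to the interface stanza in /etc/network/interfaces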

    (More …)

     
    • strange 4:52 pm on June 11, 2009 Permalink | Reply

      why do the MTUs have to match? a workstation accessing the internet will have several devices between it and the internet, a linksys router running linux being a common one, and it will have a much lower MTU.

      the lowest common denominator will be all the gear out at the boundaries, which all the machines and servers will likely have to talk to at some point. so what do you do then?

      • brainwreckedtech 11:11 pm on June 12, 2009 Permalink | Reply

        The MTUs don’t have to match unless you enjoy having your LAN speed crippled as your computers break apart packets on their own trying to reach a common denominator. While the advice here is for optimizing the speed that computers communicate on a LAN, not the Internet, keep in mind that computers with bigger MTUs will have no trouble accepting smaller packets from computers with smaller MTUs. Your Internet download speeds won’t be affected, but your upload speeds might. However, most people’s Internet connection speed (in the US, at least) doesn’t even hit 1mb/s upstream. Factor that measly speed with all the latency due to routing, server capacity, etc., and the upload speed degradation from mismatched MTUs with the Internet becomes the least of your problems.
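        If you want to see the mismatch for yourself, ping with fragmentation prohibited is a quick test. A sketch, with the host name as a placeholder (8972 = 9000 minus 28 bytes of IP/ICMP headers):

            # succeeds only if the whole path honors a 9000-byte MTU
            ping -M do -s 8972 fileserver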

        By no means should you adjust the MTU of a machine on your LAN if its sole purpose is to upload data to the billions of anonymous users on the net. At the same time, you should consider getting that machine off your private network in the event of a security breach. A simple double-NAT will do.

    • BlueSherpa 2:17 pm on February 7, 2010 Permalink | Reply

      Corrections:

      The “theoretical” limit of gigabit, also known as the wire speed, is 125MB/s (wikipedia, 2010).

      “one can only get a little over one-third of the theoretical limit of gigabit” is not true. 900Mb/s can be attained at the normal 1500 byte MTU setting (Schluting, 2007).

      Wikipedia http://en.wikipedia.org/wiki/Gigabit_Ethernet

      Schluting http://www.enterprisenetworkingplanet.com/nethub/article.php/3485486

      • brainwreckedtech 4:33 pm on February 8, 2010 Permalink | Reply

        125mB/s (m being read as the decimal “mega” based on powers of 1000) was never in dispute, but I did botch my original 101MB/s (M being read as the computer “mega” based on powers of 1024). The correct calculation is 1,000,000,000 bits ÷ 8 bits/byte ÷ 1024 bytes/kilobyte ÷ 1024 kilobytes/megabyte = 119.21MB/s.

        I’ll recant my “never,” but Schluting used server-class hardware and mem-to-mem copying. Consumers are going to be hard-pressed to find such equipment and are more apt to re-use old equipment and go by drive-to-drive copying over Ethernet. Following his advice gave my speed a bump to an average of 43MB/s, with a range anywhere from 36MB/s to 50MB/s. Nice, but far short of 119MB/s.

        • august 8:39 pm on February 10, 2010 Permalink | Reply

          In networking, “mega” was always the proper “mega” – 1000000.

          The only thing that really ever used the 2^10 thing was memory sizes, because they naturally come in powers of 2.
          (And please use the proper prefixes (MiB etc) if you’re going to use the binary variant.)

          So 125MB/s is the right number.

          Also, if you’re doing drive-to-drive copying, you’re probably measuring the speed of your disk, and not the ethernet.

          • brainwreckedtech 4:37 am on February 11, 2010 Permalink | Reply

            Now you’re just picking nits.

            2^10 is used for all storage sizes, not just memory.

            125mB/s is a correct number. So is 119MB/s when that number is the theoretical ceiling that will be reported by any OS transferring a file.

            I’ve seen MiB, and guess what? Fuck it. Using M for 2^20 and Mi for million is great, but what do you do for the giga level? Do they use G for 2^30 and Tr for trillion? NO, THEY USE Gi FOR TRILLION. And you can’t go back and say Gi = decimal giga because, if that was their intent, they should have used Me for decimal mega. So you can try and use that system if you want. I see no harm in using caps for bigger values and lowercase for smaller values because — at least to me — it makes sense and can be made consistent.

            Finally, where do you think the data is coming from that’s being served over Ethernet? The config I’m using is two 250GB Samsung SP2504C drives in software RAID 0 in a file server. These drives are rated for average random read and write speeds of around 45MB/s. RAID 0 absolutely can double the average speed of random reads and writes, so that gives me a ceiling of 90MB/s. There is negligible, if any, difference between Linux software RAID and hardware RAID. I was only getting 50MB/s tops, so it wasn’t the hard drives. The NICs are on-board, so it isn’t the PCI bus. And even if it were, 50MB/s is still below the 78MB/s limit of 33MHz PCI.

            • ah 6:35 am on March 12, 2014 Permalink | Reply

              when I test, I don’t copy data from and to actual disk drives, I use dd to copy /dev/zero over the network to /dev/null at the other end, which cuts out the disk drive completely. Do you not use this method?

              • BrainwreckedTech 4:02 am on March 19, 2014 Permalink | Reply

                Not a bad idea, but shortcuts (a large stream of nothing but zeros) can be taken at the kernel level. Better to take an ISO (or a file created with /dev/urandom) inside a tmpfs and copy that.
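                A sketch of that kind of test, assuming traditional netcat on both ends (host name, port, and sizes are placeholders; OpenBSD’s nc drops the -p):

                    # receiver: throw everything away
                    nc -l -p 5000 > /dev/null

                    # sender: stage ~1GB of incompressible data in RAM, then time the send
                    sudo mkdir -p /mnt/ramtest
                    sudo mount -t tmpfs -o size=1100m tmpfs /mnt/ramtest
                    dd if=/dev/urandom of=/mnt/ramtest/test.bin bs=1M count=1024
                    time nc fileserver 5000 < /mnt/ramtest/test.bin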

    • Darr247 12:02 pm on December 16, 2010 Permalink | Reply

      You’ve got it backwards, so no wonder you think it’s a bad idea. 😉

      M=10^6; Mi=2^20

      The idea is, hard drive manufacturers used (some would argue “correctly”) the SI version of mega (ergo influencing what people’s idea of MB should be on computers) to mean 1,000,000 bytes, so back at the turn of the century the IEC designated MiB to mean 1,048,576 bytes.

      It’s pronounced mebibytes, short for mega binary bytes.

      Likewise,
      G = 10^9; Gi = 2^30 (giga/gibi)
      T = 10^12; Ti = 2^40 (tera/tebi)
      et cetera (P/Pi, E/Ei, Z/Zi) up to
      Y = 10^24; Yi = 2^80 (yotta/yobi)

      I don’t know that SI has abbreviations for values greater than 10^24… but when we get to 10^100 capacities, hard drive manufacturers will no doubt start using “googlebytes” thanks to the non-math guy who originally registered the domain name (instead of “googol” which is what the search engine’s inventors intended).

      Implicitly, b=bits and B=bytes, also.

      I’ve never seen anyone use G or Gi for trillion. Got a cite?

      Finally, thanks for the linux gigabit tuning tips.

    • BrainwreckedTech 6:07 am on December 26, 2010 Permalink | Reply

      You’re right about me being backwards on X and Xi.

      The hard drive manufacturers started using the decimal interpretation because it meant they could advertise bigger numbers. (Never attribute to malice what can be adequately explained by stupidity…or greed.) No one paid much mind back in the days of the kilobyte because the difference was small, and we’re used to a bit of fibbing from marketing departments. The excuse, “you lose some space due to formatting,” held enough truth to keep most users calm. I can’t recall exactly when the difference became a major ordeal, but fuzzy hindsight says it probably came with the introduction of the first gigabyte drives — kinda hard to chalk the drop from 1GB to 953MB up to formatting.

      Tangent: Networking came after hard drives. They took a page from the hard drive manufacturers’ playbook and took it a step further by never graduating beyond bits.

      You can see Gi in use in Gnome and KDE. Besides, by definition, it is one billion: 1,000,000,000 or 1,073,741,824, depending on which definition.

    • Pyrrhic 1:26 pm on February 23, 2011 Permalink | Reply

      Thank you for posting this. I had a SIOC… etc error, and your post allowed me to fix it. The MTU discussion is also very, very useful. Again… thanks!

      Best,

      P.

    • Markus Torstensson 6:19 am on June 23, 2011 Permalink | Reply

      Thanks dude 😀 Works like a charm. Kind of a shame about the prefix flamewar you had to put up with.

  • BrainwreckedTech 1:27 am on May 14, 2009 Permalink | Reply
    Tags: binary drivers, bleeding edge, graphics, hardware

    Ubuntu And The GeForce 200 Series 


    As of this writing, I’m getting conflicting views on the nVidia GeForce 200 series of cards and Jaunty. Canonical says the GTS 250 is supported in 9.04, which uses driver version 180.44. However, nVidia states that driver version 180.51 adds support for the GTS 250. And while nVidia gives you the 180.51 drivers if you specify a GTX 295 card, driver package 181.20 on the Windows side officially states support for that card. Ah, the bleeding edge.

    In all probability, you can use 180.44, but your card will come up as an unknown nVidia card. Try the Canonical-approved driver installation first with sudo apt-get install nvidia-glx-180 (if you don’t already have the driver installed).

    If that doesn’t work, use the Canonical Launchpad driver (180.53).

    If that doesn’t work, there’s always manual installation.
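    Manual installation means grabbing the .run installer from nvidia.com and running it with X shut down. A rough sketch, with the filename/version as an assumption (match it to whatever you download):

        # drop to a console and stop the display manager first
        sudo /etc/init.d/gdm stop

        # run the installer and follow the prompts
        sudo sh NVIDIA-Linux-x86-180.51-pkg1.run

        # bring the desktop back up
        sudo /etc/init.d/gdm start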

     
  • BrainwreckedTech 3:15 am on April 23, 2009 Permalink | Reply
    Tags: apex, hardware, newegg, power supplies

    REVIEW: Apex / Allied SL-275TFX 


    The Guilty Party
    I really would execute it if I carried guns

    The fun never stops. I found my web server dead today due to a bad power supply. However, unlike some things that succeed in just barely outlasting the warranty (evil eyes staring at you, Samsung and Microsoft), this thing broke down right before it expired. I’ve contacted Newegg about a replacement, but we’ll see how that goes. (I didn’t buy the power supply outright. It came bundled with the case, but Newegg just so happens to carry the exact replacement part.)

    UPDATE: I had to send the whole thing back to Newegg, which doesn’t carry the product any more. Even though their policy allows me to select a similarly-priced case, I’d rather have had this whole nonsense over with by now by just having the friggin’ power supply replaced. I never thought I’d say this, but this is one aspect of Newegg’s business model that needs to change. Being made to send back an entire bundle because one part failed, when the individual parts weren’t listed line-by-line and Newegg happens to sell that part separately, is rather irritating.

     
  • BrainwreckedTech 1:34 am on April 17, 2009 Permalink | Reply
    Tags: hardware

    Mac Vs PC: It’s About The User, Stupid 

    With all of the recent stink Microsoft has raised over its Laptop Hunter commercials, I figured I’d go to one of my old blogs and pull up some old research I did on my own. This research was done back in 2006. Some evidence of this is Dell’s CRT, DDR2 RAM costing $100 in the 1GB-and-lower range, and Dell’s PATA drive. In the end, it was still a toss-up that depended on what you wanted out of your computer. If you just wanted to browse the Internet and read email, or wanted a bare-bones computer, Dell won. If you wanted a bit more, the Mac Mini won.

    This highlights something both Apple and Microsoft need to come to terms with.

    Don’t need a lot of power? The PC’s cheaper components will work just fine.
    Looking for something more but don’t want to deal with the intricacies of computers? Get a Mac.
    Looking for the ultimate in choice and power? The PC delivers.

    Instead, we get Microsoft and Apple trying to convince us of their tech godliness. Their solutions fit everyone’s needs every time! Honest! That’s why we have Microsoft fanboys getting scorned for trying to suggest that PCs are so easy to use, while Apple fanboys get scorned for adding a bunch of stuff to the price of a PC that some users just don’t need and for suggesting their systems are more powerful than the PC equivalent.

    (More …)

     
  • BrainwreckedTech 10:21 pm on April 5, 2009 Permalink | Reply
    Tags: hardware

    What The Hell Happened To Dell? 


    I remember when Dell was the gold standard of the PC industry. When my wife and I were dating, my now-father-in-law’s Dell desktop started having some trouble and they narrowed it down to some RAM. They sent him some new RAM sticks, only to have those sticks bork his computer. So they sent him a complete CPU/mobo/RAM replacement. Nary a complaint from him.

    Fast forward nearly 10 years. I regret recommending my brother go with Dell when he was looking for a laptop. His hard drive has failed on him for a second time. When he calls, it’s like pulling teeth. Everyone has a foreign accent, which in itself is not a bad thing, but it’s obvious Dell has completely outsourced its tech support. It’s also obvious that these employees are struggling with a mess that Dell has created for them. Despite his supplying the express service code during the IVR (Interactive Voice Response, aka talking to a machine) interaction, the first rep asks for it again. That rep then forwards the call to desktop hardware support. As this happens every freakin’ time, this is more systemic ineptitude than employee ineptitude. After the second tech forwards us to laptop hardware support, we’ve already wasted 25 minutes on the phone.

    (More …)

     
  • BrainwreckedTech 1:47 am on January 30, 2009 Permalink | Reply
    Tags: audio, hardware, jack

    Can’t Monitor Your Mic-In? JACK To The Rescue! 

    PROBLEM: I have my Xbox 360’s sound routed through my desktop PC via Line-In, but that requires the loud PC to be on. I wanted to switch to using my laptop instead, BUT THERE’S NO HARDWARE MIXING for mic-in, even though the mic-in on my laptop is stereo and therefore obviously doubles as line-in.

    SOLUTION: JACK Audio Connection Kit for Linux allows you to mix mic-in into your computer’s speaker-out via software mixing.
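    Once jackd is running, the whole trick is patching the capture ports straight into the playback ports. A minimal sketch, assuming the laptop’s sound chip is ALSA device hw:0 (check your actual port names with jack_lsp):

        # start JACK against the ALSA device
        jackd -d alsa -d hw:0 &

        # wire mic-in (capture) straight through to speaker-out (playback)
        jack_connect system:capture_1 system:playback_1
        jack_connect system:capture_2 system:playback_2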

    (More …)

     
  • BrainwreckedTech 2:12 am on December 23, 2008 Permalink | Reply
    Tags: hardware

    EeePC and Ubuntu 8.10 


    Here we go again. The last time I installed Ubuntu (or rather, Xubuntu) onto Asus’ EeePC was a bit of an ordeal. This time around, things are… still an ordeal, unfortunately.

    First off, it seems people have got a little fed up with Canonical and made their own kernel. (Behold the beauty of the GPL!) This kernel has all of the hardware modules the EeePC needs compiled in, including the modules for the wireless chipset (which are GPL and could be included in the main distribution, but aren’t). Additionally, there’s an experimental lean version of the kernel that strips out modules the EeePC doesn’t need. Boot time is significantly improved, and overall performance also gets a slight boost with this kernel.

    There’s also now an eee-control package available that lets you easily toggle settings like WiFi on/off and performance profiles from a system tray applet, and there’s been more poking around with SSD performance. The old script that was developed is no longer needed.
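    I won’t rehash the full setup, but assuming the custom kernel ships as a .deb package (the filename below is a made-up example), installing it is the usual routine:

        # install the EeePC-tuned kernel image
        sudo dpkg -i linux-image-2.6.27-eeepc_custom_i386.deb

        # make sure the bootloader menu picks up the new kernel
        sudo update-grub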

    (More …)

     