@michalorman.com

  • Commands And Queries

    March 16, 2015

    After a few lost years in software architecture, during which developers rushed towards so-called “productivity” and used the word “pragmatic” to justify poor design, we are finally seeing a return to the good old OOP practices (or the abandonment of OO programming altogether). As hacked-out codebases grew, developers realized that maintainability, testability, and extensibility are important factors. It’s a perfect opportunity to revisit some basics.

    One of the lesser-known principles is command-query separation (CQS). As originally introduced, CQS applied to methods, but over time it was also applied to application architecture, where query and command interfaces are clearly separated. In this post I’ll cover the former case.

    The idea behind CQS is to divide methods into the following categories:

    • Queries, which examine the system’s state, return a value, and cause no side effects.
    • Commands, which change the state of the system (world) but do not return a value.

    We use queries to inspect an object’s or system’s state, while commands are used to mutate that state. Personally I like to extend this list with factory methods, which are somewhat a combination of command and query. The purpose of a factory method is to create and return an object. It is an abstraction over the object creation process.

    Queries

    A query method examines the state of a system. I’m using the word system instead of object on purpose, as queries are not limited to a single object. For example, testing file existence means communicating with the file system, not just checking whether an object’s file attribute is set.

    Query methods are those which look like this:

    process.pending?
    File.exists?(path)
    transaction.state

    Executing the above methods should never change the state of any part of the system. However, the result of a query method may be evaluated lazily, e.g.:

    s3object.exists?('/some/s3/key')

    The S3 connection can be established whenever it is needed (when the exists? method is called).
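
    A minimal sketch of how such lazy evaluation might look (the S3Object and S3Connection classes are hypothetical, not the real AWS SDK API):

    class S3Object
      def exists?(key)
        connection.key_exists?(key)
      end

      private

      def connection
        # The connection is established only on first use,
        # i.e. the first time exists? is called.
        @connection ||= S3Connection.new
      end
    end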

    Methods of this type should preferably return primitive values which are used to control execution flow (meaning that values returned by query methods are used in conditional statements like if). Why should you prefer returning primitive values instead of regular objects? To avoid train wrecks (and to not violate the Law of Demeter).

    A train wreck is something like this:

    user.profile.address.street
    entry.where(type: :draft).joins(:attachments).all

    Train wrecks are considered a code smell. You might be asking why you should avoid train wrecks when they’re so convenient. Well, if convenience is all you need from your code then go ahead and write them, but if you are serious about testability and maintainability then embrace delegation! Delegation encapsulates implementation details, as the caller shouldn’t care how to get a street out of a user object. Train wrecks are fine for simple and small apps. But if you expect the application to grow (and which app nowadays doesn’t?) it’s better to avoid them sooner rather than later.
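
    For example, with Ruby’s Forwardable module, delegation might look like this (the User and Profile classes here are a hypothetical sketch):

    require 'forwardable'

    class Profile
      extend Forwardable
      def_delegator :@address, :street  # Profile hides its address object

      def initialize(address)
        @address = address
      end
    end

    class User
      extend Forwardable
      def_delegator :@profile, :street  # User hides its profile object

      def initialize(profile)
        @profile = profile
      end
    end

    # Callers now write user.street instead of user.profile.address.street.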

    Train wrecks are why many developers find it so difficult to write tests. The amount of stubbing they need to do in order to test simple things is overwhelming. We call that setup hell, or setup pain. Delegation boils the mocking down to just one method call, drastically reducing the code required to set up a test.

    One could defend the ActiveRecord example, as some call that kind of implementation a Fluent Interface. However, it is very hard to unit test, as tests require a bazillion stubs and mocks for the objects returned along the way up to the all invocation. That chain exposes a lot of implementation details: the interface used (ActiveRecord), the database type (relational, supporting join statements), the column name (type) together with its allowed value (draft), and the relation (attachments). That is a lot of reasons to change.

    Commands

    As said, commands are used to mutate the state of a system. They instruct an object to perform some action. Typically they shouldn’t return a value, as commands are not meant for flow control; however, we often see violations of that rule, one of which is in my opinion acceptable: the factory method.

    Commands are methods which look like this:

    Resque.enqueue(SendNotificationJob, user_id)
    user.save
    document.ready!
    service.call(transaction_id)

    For some reason developers often violate the rule that commands should not return a value. Most often they return true or false to indicate whether the command succeeded (or, to be more precise, they return values which evaluate to true or false). Well, this is the fastest way to get spaghetti code with a lot of if-else statements. Commands and their return values shouldn’t be used for flow control. Commands should follow a fire-and-forget approach. If anything goes wrong, an exception should be raised.
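
    A minimal sketch of that approach (the names are hypothetical, and I’m assuming an ActiveRecord-like model where update! raises on failure):

    class ReadyDocument
      InvalidStateError = Class.new(StandardError)

      def initialize(document)
        @document = document
      end

      def call
        raise InvalidStateError, 'only drafts can become ready' unless @document.draft?

        @document.update!(state: :ready) # update! raises instead of returning false
        nil # deliberately nothing meaningful to return
      end
    end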

    Factories

    Factories weren’t originally specified when the CQS term was coined. However, I define them as a special type of command. The purpose of a factory method is to encapsulate object creation. It creates and returns a new object.

    Examples of factories are:

    @post = Post.create(post_params)
    @user = CreateUser.call(user_params)

    Apart from the returned value, there are no other differences between factories and commands.
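
    A sketch of what the CreateUser factory from above might look like inside (the implementation is my own illustration, assuming an ActiveRecord-like User):

    class CreateUser
      def self.call(params)
        user = User.new(params)
        user.save!  # the command part: mutates the system, raises on failure
        user        # the query part: returns the newly created object
      end
    end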

    Why should I care?

    Knowing a method’s type is very useful, as it indicates how to write tests. Queries are mostly stubbed, whereas for commands we need to set up a mock expectation ensuring the command is executed with the correct parameters. To learn more about stubs and mocks I suggest reading Uncle Bob’s The Little Mocker post.
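
    In RSpec terms, reusing the earlier examples, it might boil down to this:

    # Query: stub the returned value that drives the flow under test.
    allow(process).to receive(:pending?).and_return(true)

    # Command: set an expectation that it is invoked with the correct arguments.
    expect(Resque).to receive(:enqueue).with(SendNotificationJob, user_id)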

    Knowing the method type makes it much easier to avoid typical code smells, and it makes code more maintainable and easier to test. Personally I like to identify pieces of code that can be wrapped into a command method with a single responsibility. That lets me write methods which are much shorter and easier to read, as well-designed code passes the tests and reveals its intention.

  • Fetch Your Environment Variables

    February 05, 2015

    So you’ve just read The Twelve-Factor App and decided that you’re going to store your configuration in the environment. Armed with dotenv, you’ve changed the configuration of the AWS SDK to this:

    AWS.config(
      access_key_id:      ENV['S3_ACCESS_KEY'],
      secret_access_key:  ENV['S3_SECRET_KEY'],
      region:             ENV['S3_REGION']
    )

    Everything works fine on your local machine, so you commit and push the changes to the origin. Now you need to update the server configuration, so you add the following to .bashrc:

    export S3_REGION=us-west-2
    export S3_ACCESS_KEY=CE358D4DB14B5BDAF5DCDD30E2C8BD7E
    export S3_SECRET_KEY=55837e48dbcdf98d8277033086d5502b

    Deploy, and… it’s not working. The environment variables are not available. If you, like me, always have trouble figuring out whether you are in a login/interactive shell or not, and juggling configuration between .bashrc and .bash_profile, perhaps this figure will help you understand where settings should go. But you must be aware that if you are using solutions like Upstart/Monit to start/stop your application, you can be confused about which variables and what PATH are available to the application. On the other hand, you still want the same variables to be available when you log in via SSH.

    But this post is not about how to properly set up an environment. It is about fetching environment variables.

    The problem I’d like to point out is related to the way we read environment variables from ENV. In fact, you can find this way of obtaining env variables in Rails 4:

    # test/test_helper.rb
    ENV['RAILS_ENV'] ||= 'test'
    
    # config/secrets.yml
    secret_key_base: <%= ENV['SECRET_KEY_BASE'] %>
    
    # config/environments/production.rb
    config.serve_static_files = ENV['RAILS_SERVE_STATIC_FILES'].present?

    The problem is that if we call [] on ENV with a key that is missing, nil will be returned. That will probably cause weird errors far away from the real cause. It might take some time to figure out that the error is caused by a misspelled or unconfigured environment variable. Reading environment variables via [] does not allow us to gracefully handle the case when a key is missing, or to set defaults (most probably we’d use ||= to solve that). There is a better way, though.

    Use the fetch, Luke

    ENV responds to fetch, similarly to Hash and Array. That way we can set default values or, by providing a block, gracefully handle missing keys. Also, calling fetch with an unknown key will raise a KeyError that tells us exactly which key is missing. That is in fact the behavior we expect from the app: without its required settings it simply refuses to work, complaining about the missing setting rather than about some random nil references.
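
    For example:

    ENV.fetch('S3_REGION')                # raises KeyError when the key is missing
    ENV.fetch('S3_REGION', 'us-west-2')   # falls back to a default value
    ENV.fetch('S3_REGION') { |key| warn "#{key} is not set" }  # handle the miss in a block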

    So refining our previous example, the AWS configuration should look as follows:

    AWS.config(
      access_key_id:      ENV.fetch('S3_ACCESS_KEY'),
      secret_access_key:  ENV.fetch('S3_SECRET_KEY'),
      region:             ENV.fetch('S3_REGION')
    )

    The application will tell us exactly which key is missing in case we forgot to set it. Now we can go back to figuring out which file the environment variables should go into to be available to our application.

  • Testing Inside And Outside Boundaries

    June 04, 2014

    This post is a continuation of my previous post about integration and unit testing practices. I encourage you to read the previous post first.

    Web Application Architecture

    Traditionally we used MVC frameworks to implement our web applications. While this pattern worked great in the past, that is no longer the case. The problem is that our applications are fairly complex nowadays. We expose features over HTTP via REST/JSON APIs, we communicate with external APIs, and sometimes we have more than one storage. The problem with MVC is that it was not designed to tackle growing application complexity.

    In the traditional MVC approach your business logic is implemented in models. However, as the application grows, the models grow as well. If you don’t tackle the growth of the application you’ll end up with 6000-line models with hundreds of callbacks.

    The concerns that were introduced to Rails do not solve the problem. They do not deal with complexity. Instead, they just move complexity aside so that we do not see it when we open a model class. But the issues remain the same as without concerns.

    For applications that require tackling growing complexity, we need something more than plain old MVC.

    The Hexagonal Architecture aka Ports and Adapters

    Hexagonal Architecture may help us tackle growing application complexity. The basic architecture in the context of a Rails application is depicted in the following figure:

    Rails Hexagonal Architecture

    We can see that we have two boundaries in this architecture. The core is where all the domain logic lives. Outside the core, but still within the application boundary, lives all the code related to the framework (Rails) and everything that wraps calls to IO, the file system, external services, even libraries. Everything else is external to the application.

    It is important to note that through each component (port), dependencies go in the same direction. Controllers depend on the core, but the core does not depend on controllers. The core depends on models, but models do not depend on the core, and so on.
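
    A minimal sketch of that direction of dependencies (the class names are made up for illustration):

    # Core: knows nothing about Rails; the boundary is injected as a port.
    class PlaceOrder
      def initialize(payment_gateway)
        @payment_gateway = payment_gateway
      end

      def call(order)
        @payment_gateway.charge(order.total)
      end
    end

    # Boundary: the Rails controller depends on the core, never the reverse.
    class OrdersController < ApplicationController
      def create
        PlaceOrder.new(StripeGateway.new).call(order_from_params)
      end
    end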

    Testing the core

    As said, the core is all your business logic. I’ve intentionally left it as a placeholder instead of describing which components the core should consist of. It really depends on the application’s specific requirements and your fluency in object-oriented design. You can apply Domain-Driven Design principles, or you can create simple service objects. Whatever works given the application’s domain and requirements.

    Note that things like DDD were designed to tackle growing complexity. If your application is not fairly complex and its level of complexity is not rising, using DDD is overkill. Use good judgment when picking the right tools!

    Because you don’t know the exact architecture of an application’s core up front, it is the perfect place to drive the design by tests. You can easily replace components that do not belong to the core with test doubles and test the core independently of the framework.

    You may ask how isolation helps in testing core logic. Isolation comes with the burden of mocks and stubs; is it worth the price? I strongly believe that isolation helps significantly (and not just because the tests run faster; that comes for free).

    Firstly, if you are isolated from the framework you deal only with the complexity of the problem you are solving (we call that essential complexity). You don’t need to deal with the additional complexity inherited from the framework (that’s accidental complexity). Well, at least other than that of the framework(s) used for testing.

    Secondly, mocks and stubs give you greater control over the different circumstances that may occur. This is especially helpful if you want to test different scenarios, including connection timeouts, etc. It is much easier to tell a mock to throw an exception than to try to simulate similar behavior on a real object. Sometimes you don’t even have access to a test or sandbox environment.
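
    For instance, with RSpec doubles simulating a timeout is a one-liner (http_client and the error class are hypothetical names for an injected collaborator):

    # Force a circumstance that is hard to reproduce against a real service.
    allow(http_client).to receive(:get).and_raise(Timeout::Error)

    expect { service.call }.to raise_error(Service::Unavailable)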

    It is quite safe to use mocks and stubs when testing the core. Everything that surrounds the core belongs to the application, so you are mocking only components that you own. That is very important.

    Those are two great benefits of testing in isolation. And I believe they are worth the price.

    Testing the boundaries

    Everything that belongs to the application but is not the core I classify as a boundary. These components are very different from core components, and therefore they should be designed and tested differently.

    There is no point in driving the design of boundaries from tests. Their design is already enforced by the framework, APIs, and libraries (even the standard library). There is not much benefit to be gained from test-driving that part of an application.

    Also, the complexity of these components does not change. A wrapper for an HTTP client will remain the same no matter which library you use under the hood (whether it be Faraday, RestClient, or Net::HTTP). If the complexity is not growing and is predictable, you gain no benefit from tackling it.

    There is no point in testing boundaries in isolation. Boundaries are thin components that delegate messages either outside the application or into the core. They unwrap data from one structure and wrap it into another. There is not much logic to test here. We won’t limit accidental complexity by isolating boundaries. In fact, isolation amplifies its impact, as mocking components that you don’t own leads to very fragile tests. Also, frameworks are designed to provide nice DSLs or APIs which are not graceful to mock (like ActiveRecord call chains).

    Even if you test boundaries in isolation, you will also need to test them in integration/acceptance/system tests. You must ensure not only that boundary components do what they should, but also that they integrate well with the outside world and the core. Integration tests will cover everything that you might test in isolation (given that boundaries are simple classes without much logic inside). There is no point in testing the same piece of functionality twice.

    You can also be less strict when testing boundaries. While testing the core you will check every possible condition you can imagine and simulate; when testing boundaries you don’t need to remove every possible mandatory parameter from a request to ensure that it responds with a bad request each time. You don’t need to test functionality provided by the framework. You just need to ensure that it integrates correctly.

    Recap

    When dealing with an application that is fairly complex, and you predict that the complexity will grow, you need something more than plain old MVC. Wrap your core logic and surround it with a shell of components isolating it from the outside world. Let those isolating components be logic-less, simple delegators that transform data structures. Test-drive your core design and test it in isolation; you will deal only with essential complexity and will have greater control over different scenarios and conditions. Do not test-drive code that does not belong to the core. Its design is driven by frameworks, libraries, or protocols, and it is not a complexity that you want to deal with. Make your ports simple so that their complexity stays at a low, stable, and predictable level.

  • Unit or integration testing?

    May 15, 2014

    Lately we have been experiencing a holy war between TDD followers and those who stand against it. As a strong believer in OOP principles I should consider myself a TDD follower. However, I think that the truth, as always, lies somewhere in between. I practice TDD only in the parts of an application where I think it makes sense to drive the design from tests. I don’t obsessively try to write tests first, because sometimes I simply can’t. I also don’t try to unit test everything. I believe there are parts that can and should be unit tested, and some that shouldn’t. I’m trying to get the best of both worlds.

    So I’ve decided to describe my understanding of how and what to test, and when to TDD.

    Application borders

    The key to understanding what to test as a unit and what requires integration testing is knowing that each application has its borders. What lies inside the borders you can call the core of the application, and what lies outside the borders you can call… well, it doesn’t matter what you call it. The point is that the majority of your testing and design efforts should focus on the core of the application. It is also very important to ensure that all dependencies cross the borders in the same direction. Dependencies should go from your core to whatever lies beyond the application borders.

    The core of an application is all the services, interactors, business objects, or whatever you call them, that implement your domain and business logic. All the other stuff, like the framework, routing, database, external APIs, file system, external libraries, and presentation layer, just supports your core.

    Given that, what you should TDD and unit test is all your core logic. And the reason behind that is not test speed. Test speed is a benefit we gain for free when we unit test in isolation. The reason is that when you unit test in isolation, you have greater control over all the conditions that may occur at the borders of the system that you mock. It is much easier to simulate certain situations. It is easier to instruct a mock object to return a certain response than to instruct an external API. Sometimes you don’t even have a sandbox/test environment which you can call in your tests.

    TDD works well for designing your core application logic. Classes are small, decoupled, and easy to test. Test speed comes for free.

    In contrast, everything that lies on and outside the borders should be tested in integration. There is no point in unit testing Rails controllers or views. Similarly, there is no point in unit testing ActiveRecord constructs like scopes. It really doesn’t matter that a scope builds the query you expected by calling the appropriate ActiveRecord methods, unless you execute that query against the real database and ensure it is valid and returns the proper records.
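
    A sketch of what that looks like (the Entry model and its drafts scope are hypothetical; the point is that create! and the scope both hit a real test database):

    describe Entry, '.drafts' do
      it 'returns only draft entries' do
        draft = Entry.create!(state: 'draft')
        Entry.create!(state: 'published')

        # The scope executes a real query, so an invalid one fails here.
        expect(Entry.drafts).to contain_exactly(draft)
      end
    end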

    The point is that you must ensure that your core application logic integrates well with the outside world. You really don’t need to test every single case that may happen; leave that to unit tests if possible. Ensure that everything works well together, and you’re done with integration tests.

    When application borders blur

    I think this is the cause of all the misunderstanding in the whole anti-TDD story. The problem is that Rails blurred the application borders. In effect, core logic lands either in controllers or in models. We can say that the framework drives our design.

    That is fine for simple, small to medium-sized applications and whenever you need a fast prototype. If you are in this situation, do not fight against the framework’s design; embrace it instead. However, if your business logic is complex and you expect your application to be medium to large, letting the framework drive the application design is not a good idea. You will die somewhere between line 2000 and line 6000 of a User class with a hundred callbacks.

    When application borders are not clearly defined, you shouldn’t unit test. It’s pointless, as mocking all the details will kill you (have you ever mocked a chain of ActiveRecord calls?). Driving the design by tests is also pointless, as the framework already drives your design, and the two will conflict.

    However, if your application logic is fairly complex, you will need something that wraps the core logic and clearly defines the borders. The concerns that were introduced to Rails are not a good answer to that problem in my opinion. Concerns, which are nothing more than mixins, are a form of inheritance, and you really want to avoid inheritance when possible (and use aggregation instead). Here, driving the core logic design with TDD is perfectly fine. Everything else doesn’t need TDD-ing.

    Driving design by integration/system tests is pointless. They are too general. Too abstract.

    Where are my borders?

    In a typical Rails application, controllers are your borders. ActiveRecord models are borders. Everything that calls external APIs and the file system is a border. Calling a class from a gem may be considered a border, but that is not always the case, so you need good judgment in this regard.

    None of the above is particularly well suited to unit testing. You also cannot drive their design, as the framework or the APIs have already picked the design for you.

    Models should be tested against the database to ensure that the queries work well. Routing and views are best tested via tools like Capybara. There is no point in mocking out the HTTP protocol.

    If you call gems like Faraday or RestClient, or even the standard Net::HTTP, you really want to wrap them with your own object, like an HttpClient. The reasoning is that you want to mock (in your core) only classes that you own. If you mock things from libraries (even the standard one), you may end up in a situation where all the tests pass but the actual application doesn’t work. Your tests will also be more fragile to gem/API changes.
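
    A minimal sketch of such a wrapper (the interface is of my own choosing): the core depends on HttpClient, so only this class changes when the library is swapped.

    require 'net/http'
    require 'uri'

    class HttpClient
      def get(url)
        # The only place in the application that knows about Net::HTTP.
        Net::HTTP.get_response(URI(url)).body
      end
    end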

    In some future posts I’ll describe some techniques that I’m mentioning here.

    Conclusion

    As in every conflict there is no single truth and holy grail. Application core, encapsulated and with clearly defined borders is where you should focus your testing efforts and TDD. Design of code that lies on and beyond the borders doesn’t need to be driven by tests. It is already driven by framework, libraries, API’s etc.

    Don’t be obsessive about not writing a single line of production code before the test. On the other hand, it is much easier to do so with integration/system tests, as you write general test scenarios that cover more parts of an application at a higher level, without caring about the implementation details. General tests do not drive application design, though.

    Not every application needs to encapsulate its core logic. However, it helps significantly in complex applications and when you need to maintain the application for a long time. The decision is yours, so judge well.

    Do not be afraid of indirection (despite what DHH says). Indirection doesn’t destroy your design. With proper naming and simple, clean classes, the design is even easier to understand, as you don’t need to care about the implementation details. You can even tell what the system is doing by scanning the names of the modules and classes in your core logic.

    Focus your efforts on the things that matter: the application’s core logic. Test and design it well. Ensure that you cross the application borders in a correct way; you don’t need to test every single case for code that is not core to your application. The tests for the core already do that, so don’t repeat yourself.

  • Making Things Loosely Coupled

    April 14, 2014

    Every developer has heard terms like loose and tight coupling, yet many still have problems managing coupling in their codebases. Let’s take a look at some code, try to identify where it is tightly coupled, and refactor it to be more maintainable and testable.

    Facing the code

    So here is a class:

    class ListsUsbSupportedFiles
      def list_absolute_paths
        Dir.glob(pattern)
      end
    
      def list_relative_paths
        Dir.glob(pattern).map { |path| path.gsub(usb_path, '') }
      end
    
      private
    
      def pattern
        "#{usb_path}/**/**#{supported_file_types}"
      end
    
      def supported_file_types
        @file_types ||= "{#{Document::SUPPORTED_FILE_TYPES.join(',')}}"
      end
    
      def usb_path
        @usb_path ||= UsbKey.new.path
      end
    end

    The purpose of this class is simple: it should be able to list the relative or absolute paths of all files that are stored on a USB drive and are of an appropriate, supported file type. Now pause for a moment, look at the class, and try to identify all the places where it is tightly coupled.

    Problems are rising

    The easiest way to identify coupling is to find all references to other classes. Each time you encounter a reference to another class, you should ask yourself: does this class really need to know all that about the other class?

    Our example references the following classes: Dir, Document, and UsbKey. Let’s try answering the above question for each of them:

    • Dir - it is a class from Ruby’s standard library. There is in fact nothing wrong with referencing classes from the std-lib, and this kind of coupling can safely be left alone and refactored only if there is a good reason.
    • Document - this class provides us with the list of supported file types. But do we really need to know that this list lives in the Document class?
    • UsbKey - this class is used to get the path of the directory where the USB drive is mounted. But do we really need to instantiate that class just to invoke one method on it?

    So we can say that the coupling to the Document and UsbKey classes is tight. But there is one more, subtler kind of coupling in this class.

    Houston, now we have a real problem

    You can argue whether referencing the Document and UsbKey classes is a big problem. Maybe, but let me ask you something. How would you write a test for this class? Think about it for a while. How would you test that this class correctly lists file paths?

    The current implementation is not only tightly coupled to the UsbKey class. It is tightly coupled to the USB drive itself. You would need to have a USB stick plugged into your machine to make the tests pass! Of course, you could try mocking, and it would work, but let’s check how it might look:

    UsbKey = double unless defined?(UsbKey)
    
    allow(UsbKey).to receive(:new).and_return(usb_key)
    allow(usb_key).to receive(:path).and_return(path)

    It will work. But just because it works doesn’t mean it is good. Firstly, this amount of mocking for such a simple class should already be suspicious. Secondly, what this code is really doing is mocking class internals, and you shouldn’t care about class internals. Burn this into your head: never, ever mock an object’s internals. Never. Just don’t do it. Everybody will be happier. By internals you should understand all private methods, state, and all collaborators which are not injected into the class. If you ever need to mock one of these in order to make code testable, it means that your design is wrong.

    If you are interested in more detail on why you shouldn’t mock an object’s internal state (also called implementation details), check Ian Cooper’s great presentation.

    Decoupling for the win

    So let’s try to refactor the class, this time doing it right. Let’s start by writing some tests:

    describe File::FileList do
      describe '#absolute_paths' do
        it 'returns absolute paths to files'
      end
    
      describe '#relative_paths' do
        it "returns paths realtively to list's root path"
      end
    end

    We expect a File::FileList class to provide two methods: one returning absolute paths, and a second returning relative paths. Both should include only paths to supported files; however, I’ve skipped that for simplicity, and in fact the current specs will cover it. In production we could add appropriate examples for documentation purposes.

    We need to set up some directory and file structure as a fixture:

    spec/fixtures/lib/file/file_list_spec/
        subdir/
            file2.rb
            file3.py
            ignore2.exe
        file1.rb
        ignore1.exe
    

    Now we can implement the example for the #absolute_paths method:

    ROOT_PATH = File.join('spec', 'fixtures', 'lib', 'file', 'file_list_spec')
    ABSOLUTE_ROOT_PATH = File.expand_path(ROOT_PATH)
    
    subject { File::FileList.new(ABSOLUTE_ROOT_PATH, includes: %w(rb py)) }
    
    describe '#absolute_paths' do
      let(:absolute_paths) do
        [
          File.join(ABSOLUTE_ROOT_PATH, 'file1.rb'),
          File.join(ABSOLUTE_ROOT_PATH, 'subdir', 'file2.rb'),
          File.join(ABSOLUTE_ROOT_PATH, 'subdir', 'file3.py'),
        ]
      end
    
      it 'returns absolute paths to files' do
        expect(subject.absolute_paths).to match_array absolute_paths
      end
    end

    Time to move some code to our new class:

    class File::FileList
      def initialize(root_path, opts = {})
        @root_path = root_path
        @includes  = opts[:includes].join(',') if opts[:includes]
      end
    
      def absolute_paths
        Dir.glob(pattern)
      end
    
      private
    
      attr_reader :root_path, :includes
    
      def pattern
        "#{root_path}/**/**#{supported_files}"
      end
    
      def supported_files
        "{#{includes}}"
      end
    end

    I’ve introduced the list of supported files via an optional hash parameter (in Ruby 2.x we would use keyword arguments). It should be tested in a separate context, but I’m not going to do that in this post.
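
    For completeness, a sketch of the keyword-argument version of that constructor (Ruby 2.x), with the same nil-handling as the hash version:

    class File::FileList
      def initialize(root_path, includes: nil)
        @root_path = root_path
        @includes  = includes.join(',') if includes
      end
    end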

    The code above passes the specs, so we can implement the example for #relative_paths:

    describe '#relative_paths' do
      let(:relative_paths) do
        [
          File.join('file1.rb'),
          File.join('subdir', 'file2.rb'),
          File.join('subdir', 'file3.py'),
        ]
      end
    
      it "returns paths realtively to list's root path" do
        expect(subject.relative_paths).to match_array relative_paths
      end
    end

    Now we can move the remaining code to the new class:

    class File::FileList
    
      # ...
    
      def relative_paths
        Dir.glob(pattern).map do |path|
          path.gsub("#{root_path}/", '')
        end
      end
    
      # ...
    
    end

    And we’re done!
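
    For illustration, the original USB use case now becomes a matter of composition at the call site (using the same collaborators as the original class):

    file_list = File::FileList.new(UsbKey.new.path,
                                   includes: Document::SUPPORTED_FILE_TYPES)

    file_list.absolute_paths  # what list_absolute_paths used to return
    file_list.relative_paths  # what list_relative_paths used to return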

    Conclusion

    Let’s summarize the work we’ve done in the refactoring:

    • We’ve removed the coupling to the Document class by providing the list of supported files in the constructor.
    • We’ve removed the coupling to the UsbKey class by providing the path in the constructor.
    • We’ve removed the coupling to the USB mount location, making the new class more generic and potentially useful in other cases.

    But most importantly, by decoupling the class we made it perfectly testable without the need to mock any of its internals. Not only is the class more useful, but the code is also more maintainable now. And that’s the real profit of making objects loosely coupled.

    Each time you instantiate a class within another class, think about whether you really need to know that object. If there is only one particular piece of information you need from it, or you want to call some method but are not interested in the object’s state, you can probably achieve the same by injecting that object and decoupling things. This way you can test classes with ease. Limiting collaborators is a great and simple technique for achieving testable and maintainable code without much hassle.

  • Run Rubocop Against Modified Files

    December 16, 2013

    If you are using RuboCop and, like me, working with a legacy codebase, you may be interested in running the tool only against the files you’ve modified. The following snippet will do exactly that:

    git status --porcelain | cut -c4- | grep '\.rb$' | xargs rubocop

    A more convenient script may be found here.

  • Fix Ubuntu Freeze During Restart

    October 27, 2013

    Solutions for this issue can be found on the internet, but this post will basically serve as a reminder for myself.

    There can be several reasons for a freeze during restart, but in most cases it is caused by the Linux kernel not knowing how to perform the restart given the BIOS present on the machine. Fortunately, the kernel has a few methods of performing restarts, from which we can choose one that works for our PC. I was experiencing the problem on a Dell Latitude running Ubuntu 13.10.

    So to fix the issue we need to pass the reboot parameter to the kernel at boot time. To do so in Ubuntu we can edit the /etc/default/grub file. We need to search for the following line:

    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"

    Here we can provide command-line options that will be passed to the kernel during boot. In my case the following fixed the issue:

    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash reboot=pci"

    It seems that the reboot=pci option generally solves the issue for Dell laptops.

    After you change the option you need to execute:

    sudo update-grub

    And reboot the machine in order for the provided options to take effect. You can verify them by executing:

    $ cat /proc/cmdline
    BOOT_IMAGE=/boot/vmlinuz-3.11.0-12-generic root=UUID=b5fd9a8a-4675-4277-9843-56a5c44fefb4 ro quiet splash reboot=pci

    Once you’ve ensured that Linux booted with the correct reboot option, you can test whether rebooting works.

    Other options that can be passed are as follows:

    • warm - don’t set the cold reboot flag
    • cold - set the cold reboot flag
    • bios - reboot by jumping through the BIOS (only for X86_32)
    • smp - reboot by executing reset on the BSP or another CPU (only for X86_32)
    • triple - force a triple fault (init)
    • kbd - use the keyboard controller; cold reset (default)
    • acpi - use the RESET_REG in the FADT
    • efi - use the EFI reset_system runtime service
    • pci - use the so-called “PCI reset register”, CF9
    • force - avoid anything that could hang

    In most cases bios, acpi or pci will fix the problem.

    You can pass multiple parameters at the same time and let the Linux kernel try them in the order specified. So if you want to check whether any of the parameters fixes the issue, try the following:

    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash reboot=warm,cold,bios,smp,triple,kbd,acpi,efi,pci,force"

    If this solves the restart issue, you can binary-search for the exact option that works for your PC.
