Suppose we have a new piece of functionality, a shopping cart, that we want to test. There are two big families of test methodologies we can use: automated and non-automated (manual). Manual testing was the most popular way to test software for a very long time, but as new and exciting test frameworks were created, people started shifting their attention towards automated test methodologies.
The automated way requires the developer to write some code that exercises the business logic. Given the shopping cart from the previous paragraph, instead of clicking here and there, we would use code to perform those actions automatically. It is like test scenarios executed by the machine instead of a human.
Consider a small utility that performs a few basic operations on the filesystem. One of its functions creates a new file: we provide the full path of the target file in two pieces, the directory it will live in and its filename, and if both are valid, we create the file at that path. To force a desired return value in a test, all you need to know is how the function is called and what its return type is. Whether or not it actually works depends on how the urlretrieve function behaves, which in turn depends on external factors.
In your unit test you can then provide a custom retrieve function and check that it is called the correct number of times and with the correct parameters. The Mock library simplifies creating such a stand-in for the unit test. Basically, if you need to test how your logic responds to the behaviour of the urlretrieve function, you need to inject a simulated behaviour into your program flow. One way of doing this is to wrap the urllib functionality in a module or class that produces this behaviour when the unit test is run.
So, how do you unit test a downloading function? For example, if your download functionality is in its own module, you can do something like this simplified pseudo-code:
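The pseudo-code itself did not survive in the text, so the following is one plausible reconstruction of the wrapper idea under stated assumptions; every name here is illustrative:

```python
from urllib import request

# Isolate the urllib call behind a small class so a test can
# substitute its own behaviour. All names are assumptions.
class Downloader:
    def retrieve(self, url, target):
        request.urlretrieve(url, target)

def download_files(pairs, downloader=None):
    """Download each (url, target) pair using the given downloader."""
    downloader = downloader or Downloader()
    for url, target in pairs:
        downloader.retrieve(url, target)

# In the unit test, inject a fake that records calls instead of
# touching the network.
class RecordingDownloader(Downloader):
    def __init__(self):
        self.calls = []

    def retrieve(self, url, target):
        self.calls.append((url, target))
```

Production code passes no downloader and gets the real one; the test passes a RecordingDownloader and inspects its calls list.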
Subclasses can restore that behavior by overriding TestSuite._removeTestAtIndex(). In the typical usage of a TestSuite object, the run method is invoked by a TestRunner rather than by the end-user test harness.
The TestLoader class is used to create test suites from classes and modules. Normally, there is no need to create an instance of this class; the unittest module provides an instance that can be shared as unittest.defaultTestLoader.
Using a subclass or instance, however, allows customization of some configurable properties. TestLoader objects have the following attributes:
A list of the non-fatal errors encountered while loading tests. Not reset by the loader at any point. Fatal errors are signalled by the relevant method raising an exception to the caller. Non-fatal errors are also indicated by a synthetic test that will raise the original error when run. TestLoader objects have the following methods: Return a suite of all test cases contained in the TestCase-derived testCaseClass.
A test case instance is created for each method named by getTestCaseNames. By default these are the method names beginning with test. If getTestCaseNames returns no methods, but the runTest method is implemented, a single test case is created for that method instead. Return a suite of all test cases contained in the given module. This method searches module for classes derived from TestCase and creates an instance of the class for each test method defined for the class.
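For example, a loader builds one test per matching method name; the SampleTest class here is purely illustrative:

```python
import unittest

class SampleTest(unittest.TestCase):
    def test_one(self):
        self.assertTrue(True)

    def test_two(self):
        self.assertEqual(1 + 1, 2)

loader = unittest.TestLoader()
suite = loader.loadTestsFromTestCase(SampleTest)
# One test case instance is created per method name beginning
# with the default prefix "test".
print(suite.countTestCases())  # -> 2
```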
While using a hierarchy of TestCase -derived classes can be convenient in sharing fixtures and helper functions, defining test methods on base classes that are not intended to be instantiated directly does not play well with this method.
Doing so, however, can be useful when the fixtures are different and defined in subclasses. If a module provides a load_tests function it will be called to load the tests; this allows modules to customize test loading.
For example, given a module SampleTests containing a TestCase-derived class SampleTestCase with three test methods (test_one(), test_two() and test_three()), the specifier 'SampleTests.SampleTestCase' would cause this method to return a suite which will run all three test methods. Using the specifier 'SampleTests.SampleTestCase.test_two' would cause it to return a suite which will run only the test_two() test method.
The specifier can refer to modules and packages which have not been imported; they will be imported as a side-effect. These errors are included in the errors accumulated by self.errors. Similar to loadTestsFromName, but takes a sequence of names rather than a single name. The return value is a test suite which supports all the tests defined for each name. Return a sorted sequence of method names found within testCaseClass; this should be a subclass of TestCase. Find all the test modules by recursing into subdirectories from the specified start directory, and return a TestSuite object containing them.
Only test files that match pattern will be loaded (using shell-style pattern matching). Only module names that are importable (i.e. are valid Python identifiers) will be loaded. All test modules must be importable from the top level of the project. If the start directory is not the top level directory then the top level directory must be specified separately.
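A short discovery example; an empty temporary directory is used here only to keep the result deterministic (no files match the pattern, so the suite is empty):

```python
import tempfile
import unittest

loader = unittest.TestLoader()
# Recurse into subdirectories of start_dir, loading any importable
# module whose filename matches the shell-style pattern.
start_dir = tempfile.mkdtemp()  # empty, so nothing is found
suite = loader.discover(start_dir=start_dir, pattern="test*.py")
print(suite.countTestCases())  # -> 0
```

In a real project you would point start_dir at your test directory (and pass top_level_dir separately if the start directory is not the project's top level).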
If importing a module fails, for example due to a syntax error, then this will be recorded as a single error and discovery will continue. If the import failure is due to SkipTest being raised, it will be recorded as a skip instead of an error. If a package defines a load_tests function, it will be called as package.load_tests(loader, tests, pattern). The pattern is deliberately not stored as a loader attribute so that packages can continue discovery themselves.
The following attributes of a TestLoader can be configured either by subclassing or assignment on an instance: String giving the prefix of method names which will be interpreted as test methods. The default value is 'test'. Callable object that constructs a test suite from a list of tests. No methods on the resulting object are needed. The default value is the TestSuite class.
List of Unix shell-style wildcard test name patterns that test methods have to match to be included in test suites (see the -k option).
If this attribute is not None (the default), all test methods to be included in test suites must match one of the patterns in this list.
Note that matches are always performed using fnmatch.fnmatchcase(), so unlike patterns passed to the -k option, simple substring patterns will have to be converted using * wildcards. The TestResult class is used to compile information about which tests have succeeded and which have failed. A TestResult object stores the results of a set of tests. The TestCase and TestSuite classes ensure that results are properly recorded; test authors do not need to worry about recording the outcome of tests. Testing frameworks built on top of unittest may want access to the TestResult object generated by running a set of tests for reporting purposes; a TestResult instance is returned by the TestRunner.
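For instance, configuring testNamePatterns on a loader (available since Python 3.7); the class and method names here are illustrative:

```python
import unittest

loader = unittest.TestLoader()
# fnmatch-style matching: a bare substring will not match, so the
# fragment must be wrapped in * wildcards.
loader.testNamePatterns = ["*_api*"]

class Mixed(unittest.TestCase):
    def test_api_call(self):
        pass

    def test_database(self):
        pass

suite = loader.loadTestsFromTestCase(Mixed)
print(suite.countTestCases())  # -> 1, only test_api_call matches
```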
TestResult instances have the following attributes that will be of interest when inspecting the results of running a set of tests. errors is a list containing 2-tuples of TestCase instances and strings holding formatted tracebacks; each tuple represents a test which raised an unexpected exception. failures is a list of the same shape, where each tuple represents a test where a failure was explicitly signalled using the TestCase.assert*() methods. skipped is a list containing 2-tuples of TestCase instances and strings holding the reason for skipping the test. In expectedFailures, each tuple represents an expected failure or error of the test case, while unexpectedSuccesses is a list containing TestCase instances that were marked as expected failures but succeeded. shouldStop is set to True when the execution of tests should stop by stop().
If buffer is set to true, sys.stdout and sys.stderr will be buffered in between startTest() and stopTest() being called. Collected output will only be echoed onto the real sys.stdout and sys.stderr if the test fails or errors. If failfast is set to true, stop() will be called on the first failure or error, halting the test run.
wasSuccessful() returns True if all tests run so far have passed, otherwise it returns False. stop() can be called to signal that the set of tests being run should be aborted by setting the shouldStop attribute to True. TestRunner objects should respect this flag and return without running any additional tests. For example, this feature is used by the TextTestRunner class to stop the test framework when the user signals an interrupt from the keyboard.
Interactive tools which provide TestRunner implementations can use this in a similar manner. The following methods of the TestResult class are used to maintain the internal data structures, and may be extended in subclasses to support additional reporting requirements. This is particularly useful in building tools which support interactive reporting while tests are being run. addError() is called when the test case test raises an unexpected exception. addFailure() is called when the test case test signals a failure. addSkip() is called when the test case test is skipped. addExpectedFailure() is called when the test case test fails or errors but was marked with the expectedFailure decorator, and addUnexpectedSuccess() is called when the test case test was marked with the expectedFailure decorator but succeeded. addSubTest() is called when a subtest finishes. If outcome is None, the subtest succeeded. Otherwise, it failed with an exception where outcome is a tuple of the form returned by sys.exc_info(). The default implementation does nothing when the outcome is a success, and records subtest failures as normal failures.
The old name still exists as an alias but is deprecated. defaultTestLoader is an instance of the TestLoader class intended to be shared. If no customization of the TestLoader is needed, this instance can be used instead of repeatedly creating new instances. TextTestRunner is a basic test runner implementation that outputs results to a stream. If stream is None (the default), sys.stderr is used. This class has a few configurable parameters, but is essentially very simple. Graphical applications which run test suites should provide alternate implementations.
Deprecation warnings caused by deprecated unittest methods are also special-cased and, when the warning filters are 'default' or 'always', they will appear only once per module, in order to avoid too many warning messages.
This method returns the instance of TestResult used by run. It is not intended to be called directly, but can be overridden in subclasses to provide a custom TestResult. It defaults to TextTestResult if no resultclass is provided. The result class is instantiated with the following arguments: stream, descriptions, verbosity. The run method is the main public interface to the TextTestRunner.
This method takes a TestSuite or TestCase instance. unittest.main is a command-line program that loads a set of tests from module and runs them; this is primarily for making test modules conveniently executable. The simplest use for this function is to include the standard one-line invocation at the end of a test script. The defaultTest argument is either the name of a single test or an iterable of test names to run if no test names are specified via argv. If not specified or None and no test names are provided via argv, all tests found in module are run.
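That standard line is the familiar if __name__ == '__main__': unittest.main() idiom. The sketch below writes a complete test script ending with it to a temporary file and runs it as a subprocess, to show the command-line behaviour; the file handling is just illustrative scaffolding:

```python
import os
import subprocess
import sys
import tempfile
import textwrap

# A complete test script ending with the documented idiom.
script = textwrap.dedent("""
    import unittest

    class SmokeTest(unittest.TestCase):
        def test_truth(self):
            self.assertTrue(True)

    if __name__ == '__main__':
        unittest.main()
""")

with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write(script)
    path = f.name

# Running the file executes unittest.main(), which runs the tests and
# exits with a status code reflecting success or failure.
completed = subprocess.run([sys.executable, path])
os.unlink(path)
print(completed.returncode)  # -> 0, all tests passed
```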
The argv argument can be a list of options passed to the program, with the first element being the program name. If not specified or None, the values of sys.argv are used. The testRunner argument can either be a test runner class or an already created instance of it. By default main calls sys.exit() with an exit code indicating success or failure of the tests run; passing exit=False displays the result on standard output without calling sys.exit(). The failfast, catchbreak and buffer parameters have the same effect as the same-name command-line options.
The warnings argument specifies the warning filter that should be used while running the tests. Calling main actually returns an instance of the TestProgram class, which stores the result of the tests run as the result attribute. A module can also define a load_tests(loader, standard_tests, pattern) function; pattern defaults to None, and the function should return a TestSuite.
It is common for test modules to only want to add or remove tests from the standard set of tests. The third argument, pattern, is used when loading packages as part of test discovery. If discovery is started in a directory containing a package, either from the command line or by calling TestLoader.discover(), then the package's __init__.py will be checked for load_tests. If that function does not exist, discovery will recurse into the package as though it were just another directory.
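The unittest documentation's own example of a load_tests function for a package's __init__.py has roughly this shape:

```python
import os
import unittest

# Discovery calls this hook with (loader, standard_tests, pattern)
# and uses the returned suite for the whole package.
def load_tests(loader, standard_tests, pattern):
    # Continue discovery relative to this package's directory,
    # reusing the pattern discovery was started with.
    this_dir = os.path.dirname(os.path.abspath(__file__))
    package_tests = loader.discover(start_dir=this_dir, pattern=pattern)
    standard_tests.addTests(package_tests)
    return standard_tests
```

Because pattern is passed through unchanged, the package takes over discovery for its own subtree while honouring the caller's file pattern.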
This should return a TestSuite representing all the tests from the package. Class and module level fixtures are implemented in TestSuite. When the test suite encounters a test from a new class, tearDownClass from the previous class (if there is one) is called, followed by setUpClass from the new class. Similarly, if a test is from a different module from the previous test then tearDownModule from the previous module is run, followed by setUpModule from the new module.
After all the tests have run the final tearDownClass and tearDownModule are run. Note that shared fixtures do not play well with [potential] features like test parallelization and they break test isolation.
They should be used with care. The default ordering of tests created by the unittest test loaders is to group all tests from the same modules and classes together. If you randomize the order, so that tests from different modules and classes are adjacent to each other, then these shared fixture functions may be called multiple times in a single test run.
Shared fixtures are not intended to work with suites with non-standard ordering. If there are any exceptions raised during one of the shared fixture functions the test is reported as an error.
If you want the setUpClass and tearDownClass on base classes called then you must call up to them yourself. The implementations in TestCase are empty. If an exception is raised during a setUpClass then the tests in the class are not run and the tearDownClass is not run. If the exception is a SkipTest exception then the class will be reported as having been skipped instead of as an error.
If an exception is raised in a setUpModule then none of the tests in the module will be run and the tearDownModule will not be run. If the exception is a SkipTest exception then the module will be reported as having been skipped instead of as an error. To add cleanup code that must be run even in the case of an exception, use addModuleCleanup. It adds a function to be called after tearDownModule to clean up resources used during the test module. Cleanup functions are called with any arguments and keyword arguments passed into addModuleCleanup when they are added. If setUpModule fails, meaning that tearDownModule is not called, then any cleanup functions added will still be called. doModuleCleanups is called unconditionally after tearDownModule, or after setUpModule if setUpModule raises an exception.
It is responsible for calling all the cleanup functions added by addModuleCleanup. If you need cleanup functions to be called prior to tearDownModule then you can call doModuleCleanups yourself.
With catch break behavior enabled control-C will allow the currently running test to complete, and the test run will then end and report all the results so far. A second control-c will raise a KeyboardInterrupt in the usual way.
The control-c handling signal handler attempts to remain compatible with code or tests that install their own signal.SIGINT handler. This will normally be the expected behavior by code that replaces an installed handler and delegates to it.
For individual tests that need unittest control-c handling disabled the removeHandler decorator can be used. There are a few utility functions for framework authors to enable control-c handling functionality within test frameworks.
Install the control-c handler. When a signal.SIGINT is received (usually in response to the user pressing control-c) all registered results have stop() called. Register a TestResult object for control-c handling. Registering a TestResult object has no side-effects if control-c handling is not enabled, so test frameworks can unconditionally register all results they create independently of whether or not handling is enabled.
Remove a registered result. Once a result has been removed, stop() will no longer be called on that result object in response to a control-c. When called without arguments this function removes the control-c handler if it has been installed.
This function can also be used as a test decorator to temporarily remove the handler while the test is being executed:
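For example, in the decorator form; the surrounding installHandler and removeHandler calls are scaffolding added here to make the effect observable:

```python
import io
import signal
import unittest

default_handler = signal.getsignal(signal.SIGINT)

class SignalSensitiveTest(unittest.TestCase):
    @unittest.removeHandler
    def test_runs_without_unittest_handler(self):
        # While this test executes, the handler that was active before
        # installHandler() was called is back in place.
        self.assertIs(signal.getsignal(signal.SIGINT), default_handler)

unittest.installHandler()  # unittest's control-c handler is now active
suite = unittest.TestLoader().loadTestsFromTestCase(SignalSensitiveTest)
result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)
unittest.removeHandler()   # restore the environment afterwards
print(result.wasSuccessful())
```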