Profiling the AFCON teams: DR Congo

DRC has featured in 18 AFCON finals. (PHOTOS/AGENCIES)

DR Congo will be featuring in their 19th Africa Cup of Nations (AFCON) tournament and their 4th in a row since 2013. They have won the tournament on two occasions (1968 and 1974).

In the 2019 edition, DR Congo is in Group A alongside hosts Egypt, Uganda and Zimbabwe.

DRC did not take part in the first four AFCON tournaments as they never entered the qualifiers. Their first finals tournament was in 1965 in Tunisia. They were placed in Group B alongside Ghana and Ivory Coast, finishing bottom of the pile with two losses and were hence eliminated.

In the next edition in 1968, DR Congo put their 1965 misery behind them and won their maiden AFCON title. After finishing second in Group B, DRC defeated Ethiopia 3-2 in extra time in the semis and went on to scrape past Ghana 1-0 in the final.

Entering the 1970 edition as champions, DR Congo were eliminated from the competition in humiliating fashion, finishing bottom of Group B with one point from three games.

In Cameroon 72, DR Congo bettered their previous tournament's result as they topped Group A, which included Congo, Morocco and Sudan. However, the dream of a second championship was ended by Mali, who defeated them 4-3 in the semifinals. DR Congo eventually finished 4th in the competition following a 5-2 loss to Cameroon in the third place playoff.

Their second title was sealed in the 1974 edition in Egypt. DR Congo finished second in Group B behind Congo to set up a semifinal clash with the hosts.
DRC took care of the Egyptians 3-2 and went on to seal the title thanks to a 2-0 victory over Zambia in the final's replay, after the two sides had drawn 2-2 in the first game.

DRC sealed their second AFCON victory in the 1974 edition.

In the 1976 edition in Ethiopia, DRC failed to defend their title for the second time as they exited the tournament at the group stage.

For the next three editions they did not take part: they turned down a chance to qualify for the 78 edition and failed to qualify for those of 1980 and 82. The trend continued for the next two editions in 84 and 86.

DRC's next appearance at the finals was in Morocco 88. Their tournament ended early as they exited at the group stage after finishing bottom of Group B.

In 1990, DRC failed to qualify for the finals but made amends in 92 as they finished second in Group B, only to lose 1-0 to Nigeria in the quarter finals.

The trend of exiting in the quarters continued in both the 94 edition and that of South Africa 96.

In 98, they reached the last four for the first time since their second championship in 1974. After finishing second in Group B, DRC defeated Cameroon 1-0 in the quarter finals. In the semis, they lost 2-1 to South Africa before sealing third spot with a penalty shoot-out victory over Burkina Faso.

In the 2000 edition, DRC once again failed to get past the group stage as they finished third in Group B, which featured South Africa, Algeria and Gabon.

DRC went one better in the 2002 edition in Mali. They qualified from Group C as runners-up to Cameroon before losing 2-0 to eventual finalists Senegal in the quarter finals.

DRC has been eliminated at the quarter final stage on 6 different occasions.

2004 was another miserable edition for DRC as they finished bottom of Group A, having lost all three games.

In 2006, they reached the quarter finals for the first time since 2002, finishing second in Group B behind Cameroon.
They then went on to lose 4-1 in the last 8 to eventual champions Egypt.

For the next three editions, in 08, 10 and 12, DRC failed to qualify for the AFCON finals.

Their next participation at the finals came in 2013 in South Africa. However, DRC was eliminated at the group stage after finishing third in Group B, behind Mali and Ghana.

In the 2015 edition in Equatorial Guinea, DRC finished third, their best finish at the finals since 1998. They came second behind Tunisia in Group B. In the last 8, they defeated Congo 4-1 to set up a semifinal with Ivory Coast. They lost that game 3-1 but then defeated the hosts on penalties to finish third.

The 2017 edition once again saw DRC exit the competition at the quarter final stage. After topping Group A with 7 points, they lost 1-2 to Ghana in the last 8.

Coach

Jean-Florent Ikwange Ibengé

Ibengé is a 57-year-old Congolese tactician who was handed the DRC national team head coach role in 2014. He was born in December 1961.

He was manager of Chinese club Shanghai Shenhua from April to May 2012, and of Congolese side Vita Club from February 2014. He became manager of the DRC in August 2014, combining this role with his job at Vita Club.

Ibengé has been DRC's head coach since 2014.

Star Player

Yannick Bolasie

Bolasie is a professional footballer who plays as a winger for Anderlecht on loan from Premier League club Everton. The 30-year-old has managed over 350 appearances for 9 different clubs, for whom he has scored 34 goals. For DRC, he has made 33 appearances since his debut in 2013, scoring 9 goals.

He is a player blessed with blistering pace who can hurt any defender on a given day. He is one of the players expected to light up the 2019 AFCON.

Bolasie made his debut for DRC in 2013.

Projection: Quarter finals

DR Congo Fixtures at AFCON 2019

-Uganda vs DR Congo, 21st June
-DR Congo vs Egypt, 26th June
-DR Congo vs Zimbabwe, 30th June


Test managers are under pressure to test faster and deliver software with fewer defects. Improving these two factors, velocity and defect detection effectiveness (DDE), requires a balanced mix of people, processes and tools. Orchestrated properly, anyone can increase a team's testing delivery speed and trap more defects before the final software release.

A multitude of supporting key performance indicators (KPIs) will help you achieve velocity and DDE. By meeting and exceeding the KPIs listed below, you'll be inching your QA organization towards greater efficiency and optimization. More QA testers won't solve your problems. Often, even automation is not the silver bullet, because it can introduce more overhead and maintenance than necessary, along with long-term costs. The answer you're looking for is in the data.

Nailing down your philosophy on QA scorecards and KPI monitoring is the key to unlocking the full potential of your QA organization. Here are 12 KPIs to track:

Active defects: Tracking active defects is a simple KPI that you should be monitoring regardless; lower values are better. Every software IT project comes with its fair share of defects. Depending on the magnitude and complexity of the project, there may be 250 or more defects active at any given time. "Active" for this KPI means the status is new, open or fixed (and waiting for re-test). If the defect is being worked, it's active. Set the threshold based on historical data from your IT projects. Whether that's 100 defects, 50 defects or 25 defects, the threshold determines what is OK. Anything above the threshold you set is not OK and should be flagged for immediate action.

Authored tests: This KPI is important for test managers because it helps them monitor the test design activity of business analysts and testing engineers.
As new requirements are written, develop associated system tests and decide whether those test cases should be flagged for the regression test suite. Is the test that your test engineer is writing going to cover a critical piece of functionality in your Application Under Test (AUT)? If yes, flag it for your regression suite and slot it for automation. If no, add it to the bucket of manual tests that can be executed ad hoc when necessary. Track authored tests in relation to the number of requirements for a given IT project. In other words, if you subscribe to the philosophy that every requirement should have test coverage (i.e. an associated test), set the threshold for this KPI to equal the number of requirements or user stories outlined for a sprint. That equates to one test case for every requirement in "Ready" status.

Automated tests: This KPI is challenging to track. Generally speaking, the more automated tests in place, the more likely it is that you'll trap critical defects introduced to your software delivery stream. Start small and adjust upwards as your QA team evolves and matures; a reasonable starting threshold is for 20 percent of test cases to be automated.

Covered requirements: Track the percentage of requirements covered by at least one test. One hundred percent test coverage should be the goal. The validity of a requirement hinges on whether a test exists to prove it works. The same holds true for a test that lives in your test plan: its validity hinges on whether it was designed to test a requirement. If it's not traced back to a requirement, why do you need the test? Monitor this KPI every day and question the value of orphaned requirements and orphaned tests. If they are orphaned, find them a home by tracing them to a specific requirement.

Defects fixed per day: Don't lose sight of how efficiently development counterparts are working to rectify the defects brought to their attention.
The defects fixed per day KPI ensures that the development team is hitting the standard when it comes to turning around fixes and keeping the build moving forward.

Passed requirements: Measuring passed requirements is an effective way of taking the pulse of a given testing cycle. It is also a good measure to consider during a Go/No-Go meeting for a large release.

Passed tests: Sometimes you need to look beyond the requirements level and peer into the execution of every test configuration within a test. A test configuration is basically a permutation of a test case that inputs different data values. The passed tests KPI is complementary to your passed requirements KPI and helps you understand how effective test configurations are at trapping defects. This KPI can quickly fool you into thinking you have a quality build on your hands if you don't have a good handle on the test design process: low quality test cases often yield passing results when in fact there are still issues with the build. Ensure that your team is diligent in exercising different branches of logic when designing test cases, and this KPI will be of more value.

Rejected defects: The rejected defects KPI is known for its ability to identify a training opportunity for software testing engineers. If your development team is rejecting a high number of defects with a comment like "works as designed," take your team through the design documentation of the application under test. No more than five percent of submitted defects should be rejected.

Reviewed requirements: The reviewed requirements KPI is more of a "prevention KPI" than a "detection KPI." It focuses on identifying which requirements (or user stories) have been reviewed for ambiguity. Ambiguous requirements lead to bad design decisions and ultimately wasted resources.
Monitor whether each requirement has been reviewed by a subject matter expert who truly understands the business process that the technology is supporting.

Severe defects: This is a great KPI to monitor, but make certain that your team employs checks and balances when setting the severity of a defect. Once those checks and balances are in place, set a threshold for this KPI: if a defect's severity is Urgent or Very High, count it against this KPI, and if the total count exceeds 10, throw a red flag.

Test instances executed: This KPI relates only to the velocity of your test execution plan. It doesn't provide insight into the quality of your build; instead it sheds light on the percentage of total instances executed in a test set. Monitor this KPI along with a test execution burn-down chart to gauge whether additional testers may be required for projects with a large manual testing focus.

Tests executed: This shouldn't be your only tool for monitoring velocity during a given sprint or test execution cycle; pay close attention to the KPIs described above. This is more of a velocity KPI, whereas some outlined above help monitor "preventative measures" while comparing them to "detection measures."
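Several of the thresholds above (active defects, severe defects, the five percent rejection rate, the 20 percent automation target and 100 percent requirement coverage) can be rolled into a single automated scorecard. The sketch below is a minimal illustration in Python; the record fields, function name and threshold defaults are assumptions for the example, not the schema of any particular test management tool.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical record types -- field names are illustrative only.
@dataclass
class Defect:
    status: str    # e.g. "New", "Open", "Fixed", "Closed", "Rejected"
    severity: str  # e.g. "Urgent", "Very High", "High", "Medium", "Low"

@dataclass
class TestCase:
    requirement_id: Optional[str]  # None marks an orphaned test
    automated: bool

def kpi_scorecard(defects, tests, requirement_ids,
                  max_active=50, max_severe=10,
                  max_rejected_pct=5.0, min_automated_pct=20.0):
    """Return {kpi_name: (value, within_threshold)} for a handful of the KPIs above."""
    # Active defects: anything still being worked (new, open, or fixed awaiting re-test).
    active = sum(d.status in ("New", "Open", "Fixed") for d in defects)
    # Severe defects: count Urgent / Very High against the threshold.
    severe = sum(d.severity in ("Urgent", "Very High") for d in defects)
    # Rejected defects: no more than max_rejected_pct of submitted defects.
    rejected_pct = 100.0 * sum(d.status == "Rejected" for d in defects) / max(len(defects), 1)
    # Covered requirements: percentage of requirements traced to at least one test.
    covered = {t.requirement_id for t in tests if t.requirement_id}
    coverage_pct = 100.0 * len(covered & set(requirement_ids)) / max(len(requirement_ids), 1)
    # Automated tests: percentage of the test plan slotted for automation.
    automated_pct = 100.0 * sum(t.automated for t in tests) / max(len(tests), 1)
    return {
        "active_defects": (active, active <= max_active),
        "severe_defects": (severe, severe <= max_severe),
        "rejected_defects_pct": (rejected_pct, rejected_pct <= max_rejected_pct),
        "covered_requirements_pct": (coverage_pct, coverage_pct == 100.0),
        "automated_tests_pct": (automated_pct, automated_pct >= min_automated_pct),
    }
```

Any KPI whose flag comes back False is the "throw a red flag" condition described above; a dashboard or nightly job can surface exactly those entries for immediate action.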