CMIT-425-Discussion-Questions

Week 1 DQ 1

Each of us has our own reasons for pursuing the CISSP certification.  Why did you choose to pursue yours?

The reason I am looking to get my CISSP is job security and the fact that it will put me at the Technical Level III category for compliance with DoD 8570. I would also love to have a fun job catching hackers with the FBI, and the CISSP will put me in the running for one of those jobs. I also think a good pay raise comes with a certification of this level. I had been looking into the CASP as well because it seems a little bit easier.

Week 1 DQ 3

After reading/viewing this week’s materials, please respond to one or more of the following questions.

After viewing the video on IT Governance, describe the IT governance model and discuss its importance in instituting a comprehensive security program. What are security blueprints?

In your own words, describe the personnel best practices of mandatory vacation, separation of powers, principle of least privilege, and job rotation. Give an example of where you have seen these practices applied from your own experience.

Mandatory vacation is when upper management requires an employee to take a few days off. This is done for auditing purposes: if a person works continuously and never takes time off, they could be doing things on the system that others are unaware of, and the security team may need some time to examine their accounts and activity to make sure everything is copacetic.

Separation of powers, also called separation of duties, is used to compartmentalize a job or an organization. It is used to make sure one person is not a single point of failure and that no one person has too much power. It is "designed to prevent error and fraud by ensuring that at least two individuals are responsible for the separate parts of any task" (Wigmore, 2014).

The principle of least privilege means making sure everyone has access only to what they need and holds the lowest level of access to folders, files, and other resources. The usual rule for least privilege is to deny everything first and then, as a person needs access, start opening up rights, so that accounts run "with the smallest set of privileges needed to perform the user's tasks" (Merrifield, 2014). So the first step in hardening an account is to deny all.
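As an illustrative aside, here is a minimal Python sketch of that default-deny idea; the usernames, resource paths, and grants are hypothetical, and a real system would pull these from a directory service or ACL store rather than a dictionary.

```python
# Minimal sketch of a default-deny ("deny all, then grant") permission check.
# The resource names and grants below are made up for illustration.

GRANTS = {
    ("jdoe", "//fileserver/payroll"): {"read"},
    ("jdoe", "//fileserver/public"): {"read", "write"},
}

def is_allowed(user: str, resource: str, action: str) -> bool:
    """Return True only if an explicit grant exists; everything else is denied."""
    return action in GRANTS.get((user, resource), set())

print(is_allowed("jdoe", "//fileserver/payroll", "read"))    # True  (explicit grant)
print(is_allowed("jdoe", "//fileserver/payroll", "write"))   # False (never granted)
print(is_allowed("asmith", "//fileserver/payroll", "read"))  # False (default deny)
```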

Job rotation is used to make sure people don't get too relaxed in their jobs, so every so often they are moved into another role. It is a kind of checks-and-balances system for companies: if you are doing anything wrong in your job, the person who rotates in will see it and can report it to upper management. "Job rotation is an operational control to detect errors and frauds" (Kokcha, 2012).

In my day-to-day life I have never had a mandatory vacation because I take off a good amount of time every year. I have created user accounts at an old job, so I understand the process of least privilege. When creating an account we were told to lock it down and have the user's TASO tell us what that person should have access to; I would say that most of these accounts were on a role-based system.

 

Works Cited

Kokcha, R. (2012, May 16). Job Rotation. Retrieved from http://security.koenig-solutions.com/blog-home/job-rotation

Merrifield, J. (2014, October). Using a Least-Privileged User Account. Retrieved from http://technet.microsoft.com/en-us/library/cc700846.aspx

Wigmore, I. (2014, January 1). Segregation of duties (SoD). Retrieved from http://whatis.techtarget.com/definition/segregation-of-duties-SoD

Week 2 DQ 1

After reading/viewing this week’s materials, please respond to one or more of the following questions.

What are the different Access Control Models available to secure access to resources? Give an example of one that you have used in a work situation or if that is not possible, one that you’ve read about.

Identify the Access Control Categories and give an example of one that you have read about or have knowledge of from your own experience.

Describe threats to the Access Control domain from what was covered within the reading and give an example of each.

What are the main goals of access control and what are the best practices recommended to help in achieving them.

What are the different Access Control Models available to secure access to resources? Give an example of one that you have used in a work situation or if that is not possible, one that you’ve read about.

RBAC, also known as Role-Based Access Control, gives people access based on their role in the organization. For example, if the base commander was leaving and a new one was coming in, you would mirror the new base commander's access to the old one's. I have had to do this when creating accounts in AD and setting up group email accounts.

DAC, also known as Discretionary Access Control, restricts access to data by placing users in different groups and giving each group access to parts of the network. There are also data owners in the group who can change the level of access each person in the group has. An example is when someone gives another person access to their Outlook email account: the owner of the account can dictate whether they want that person to have rights to send on behalf of the account.

MAC, also known as Mandatory Access Control, gives data a sensitivity label or classification, and if a user does not hold the required classification level they are denied access to the data. It "is a system-controlled policy restricting access to resource objects (such as data files, devices, systems, etc.) based on the level of authorization or clearance of the accessing entity, be it person, process, or device" (Rouse, 2008).
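As a hedged illustration of the RBAC model described above (not a real Active Directory configuration), here is a minimal Python sketch in which permissions attach to roles and users inherit whatever their roles grant; all of the role, permission, and user names are invented.

```python
# Minimal RBAC sketch: permissions attach to roles, and users get access
# through role membership. Names below are hypothetical.

ROLE_PERMISSIONS = {
    "base_commander": {"read_ops_reports", "approve_leave", "send_group_email"},
    "help_desk":      {"reset_password", "create_account"},
}

USER_ROLES = {
    "new_commander": {"base_commander"},   # mirrors the outgoing commander's role
    "tech01":        {"help_desk"},
}

def has_permission(user: str, permission: str) -> bool:
    """A user is allowed if any of their roles grants the permission."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(has_permission("new_commander", "approve_leave"))  # True
print(has_permission("tech01", "approve_leave"))         # False
```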

Works Cited

Rouse, M. (2008, December). Mandatory access control (MAC). Retrieved from http://searchsecurity.techtarget.com/definition/mandatory-access-control-MAC

Week 2 DQ 2

After reading/viewing this week’s materials, please respond to one or more of the following questions.

What are the challenges that an Identity and Access Management system helps overcome? What benefits does it provide?

In your own words describe the four main activities that comprise the System Access Control Process. What guidelines must be followed within the Identification phase?

Identify the Information and Access Management Technologies and describe one that you are familiar with either from your own experience or give an example of one that you’ve read about.

Describe the three factors that can be used in authentication and give at least two examples for each.

Describe the three factors that can be used in authentication and give at least two examples for each.

The three factors of authentication are something you know, something you have, and something you are.

Most networks have some type of authentication process for user logins. This is to make sure the user has the correct access to the objects they need, and it also serves identification purposes.

One way to sign into the network is with a username and password. This is the least secure method because there are several ways a hacker can gain access to a username and password: they could mine social networks and guess what the password might be, or they can use things like dictionary attacks or brute force to crack the password. This method is also called something you know.
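To illustrate why a password alone is weak, here is a toy dictionary-attack sketch in Python; the leaked hash and the wordlist are invented, and real attacks use far larger wordlists while salted, slow hash functions make them much harder.

```python
# Toy sketch of a dictionary attack: hash candidate passwords until one
# matches a stolen (unsalted) hash. All values are invented for illustration.
import hashlib

stolen_hash = hashlib.sha256(b"letmein").hexdigest()   # hash leaked from a breach
wordlist = ["password", "123456", "qwerty", "letmein", "dragon"]

for guess in wordlist:
    if hashlib.sha256(guess.encode()).hexdigest() == stolen_hash:
        print("password recovered:", guess)
        break
```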

Another authentication factor is something you have. This is a little more secure than something you know because an attacker has to physically obtain something the user has, such as a token or a smart card. I use a multi-factor login method at my workplace: we need a CAC to log into the network along with a PIN. Ways people can get around this are by taking the token or duplicating the smart chip in the CAC, but these attacks are much harder to pull off.

The best type of single-factor authentication would be something you are. "Biometric methods provide the something you are factor of authentication. Some of the biometric methods that can be used are fingerprints, hand geometry, retinal or iris scans, handwriting, and voice analysis. Fingerprints and handprints are the most widely used biometric method in use today" (Gibson, 2011). I worked at a help desk where walk-ups could come and reset their biometric login or change their login method, and it was very interesting to learn the process and why the scanner works. Biometrics make it hard to gain unauthorized access, but not impossible.
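As a rough sketch of combining factors (not how a real CAC/PKI login actually works), the following Python fragment shows the idea of requiring two factors to both succeed before access is granted; the user, PIN, and card ID are invented.

```python
# Sketch of two-factor authentication: something you know (a PIN, stored only
# as a salted hash) plus something you have (a card/token ID).
import hashlib, hmac, os

def hash_pin(pin: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 100_000)

salt = os.urandom(16)
enrolled = {
    "user1": {"pin_hash": hash_pin("482913", salt), "card_id": "CAC-0042"},
}

def authenticate(user: str, pin: str, presented_card: str) -> bool:
    record = enrolled.get(user)
    if record is None:
        return False
    pin_ok  = hmac.compare_digest(record["pin_hash"], hash_pin(pin, salt))
    card_ok = hmac.compare_digest(record["card_id"], presented_card)
    return pin_ok and card_ok          # both factors must succeed

print(authenticate("user1", "482913", "CAC-0042"))  # True
print(authenticate("user1", "482913", "CAC-9999"))  # False: token factor fails
```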

 

Works Cited

Gibson, D. (2011, June 6). Understanding the Three Factors of Authentication. Retrieved from http://www.pearsonitcertification.com/articles/article.aspx?p=1718488

Week 3 DQ 1

After reading/viewing this week’s materials, please respond to one or more of the following questions.

Identify the malicious threat sources to physical security and their corresponding countermeasures.

Describe the main components of a CCTV system. What are some of the concerns with CCTV deployments?

Describe three perimeter intrusion detection systems from the physical security domain and give an example of one that you have seen deployed either at work or another location that you are familiar with.

The main components of a CCTV system are cameras, transmitters, receivers, a recording system, and a monitor. The camera captures the data and transmits it to the recording system, and the footage is then displayed on the monitor. One concern with CCTV deployments is the circuit not being tamperproof, which would allow attackers to compromise the company's CCTV system; this threatens the system's integrity, for example by manipulating the video feed to play back recordings from another timeframe. Depending on the system, the feed could also be easily hijacked. Vandalism is another problem CCTV faces: the camera sits behind a hardened plastic cover, but if someone damages that cover it becomes hard to see through. Another concern is choosing the correct lens; the lens should have the proper focal length to cover the entire area and depth of focus, and it should be adjustable. Light is a further concern, which can be addressed by deploying a light-sensitive camera that "allows for the capture of extraordinary detail of objects and precise presentation" (Harris, 2013). An auto-iris lens can also regulate the amount of light that enters the lens.

Reference

Harris, S. (2013). CISSP All-in-One Exam Guide, Sixth Edition. New York: McGraw-Hill.

Week 3 DQ 2

After reading/viewing this week’s materials, please respond to one or more of the following questions.

Describe the functions of hubs/repeaters, bridges, switches, routers, and gateways. At what layers of the OSI model does each device operate?

Describe the different Wireless standards within the 802.11 family. What is a rogue access point, and what do we have to worry about?

Describe the differences between bus, ring and star topologies. List the various wiring standards that are available for use within these topologies.

From the videos, pick one hacker profiled and describe the types of attacks they used in exploiting vulnerabilities of the networks that they targeted. What opening did they gain access through? How were they detected?

Hubs/repeaters operate at the physical layer. They repeat incoming frames without examining the MAC address in the frame.

Bridges connect "two or more media segments on the same subnet, and filters traffic between both segments based on the MAC address in the frame. They divide a network into segments to reduce traffic congestion and excessive collisions" (Harris, 2013). A bridge connects two networks and passes traffic between them based only on the node address, so that traffic between nodes on one network does not appear on the other network. Bridges operate at the data link layer of the OSI model.

Switches operate at the data link layer. A switch is essentially a multiport bridge that performs filtering based on MAC addresses; it can process multiple frames simultaneously and offers guaranteed bandwidth to each switch port (Webtycho, 2013).

Routers assign a new address per port, which allows them to connect different networks together. A router also discovers information about routes and changes that take place in a "network through its routing protocols; and filters traffic based on ACLs and fragments packets" (Webtycho, 2013). Because they work at the network level, they can calculate "the shortest and economical path between the sending and receiving hosts" (Harris, 2013). Routers operate at the network layer of the OSI model.

Gateways can be a combination of hardware and/or software that connects individual LANs to a larger network and can act as a translator. This usually involves converting between different protocols; for example, a "gateway could be used to convert a TCP/IP packet to a NetWare IPX packet" (Webtycho, 2013). Gateways can operate at all seven OSI layers.

Reference

Harris, S. (2013). CISSP All-in-One Exam Guide, Sixth Edition. New York: McGraw-Hill.

Webtycho, U. (2013, October). Network course content material. Adelphi, Maryland.

Week 4 DQ 1

After reading/viewing this week’s materials, please respond to one or more of the following questions.

Describe in your own words the differences between steganography, digital watermarking, and digital rights management.

Choose three of the basic cryptosystems and give an overview of each.

Describe the operation of a one-time pad (OTP) and give an example of a device that uses an OTP either from your own experience or from research.

A one-time pad (OTP) uses a pad of random values; a plaintext message that needs to be encrypted is converted into bits. The encryption process uses the binary exclusive-OR (XOR) function, which is applied to two bits at a time: if both values are the same the result is 0 (1 XOR 1 = 0), and if the values differ the result is 1 (1 XOR 0 = 1). For instance, User A and User B "produce a huge number of random bits and share them secretly. When User A has a message to send to User B, User A retrieves a number of random bits equal to the length of User A’s message, and uses them to be the message’s key. User A applies the exclusive or operation (xor) to the key and the message to produce the encrypted message. The key must be exactly the same size as the message. The key must also consist of completely random bits that are kept secret from everyone except User A and User B. When User B receives the message, User B retrieves the same bits from his copy of the random bit collection. User B must retrieve the same random bits in exactly the same order that User A used them. Then User B uses the sequence of random bits to decrypt the message. User B applies the xor operation to the message and the key to retrieve the plain text." (Cryptosmith, 2007) An example of a device that uses a one-time pad would be a mobile phone.
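Here is a minimal Python sketch of the XOR operation described above, assuming both parties already share the random pad; the message is invented, and, as the quote notes, the key must be truly random, as long as the message, and never reused.

```python
# One-time pad sketch: encryption and decryption are the same XOR operation.
import os

def xor_bytes(data: bytes, key: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, key))

message = b"ATTACK AT DAWN"
key = os.urandom(len(message))           # pad shared secretly in advance

ciphertext = xor_bytes(message, key)     # User A encrypts with the pad
recovered  = xor_bytes(ciphertext, key)  # User B decrypts with the same pad

print(ciphertext.hex())
print(recovered)                         # b'ATTACK AT DAWN'
```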

 

Reference

Cryptosmith. (2007). One-Time Pads. Retrieved from http://b.cryptosmith.com/2007/06/09/one-time-pads/

Week 4 DQ 2

After reading/viewing this week’s materials, please respond to one or more of the following questions.

What are the strengths and weaknesses of symmetric key cryptography? Give an example of where this type of cryptography is used. What are the strengths and weaknesses of asymmetric key cryptography? Give an example of where this type of cryptography is used.

What are the types of message integrity controls and what benefit is provided by them? Give a short description of the various secure email protocols that are referenced in the Shon Harris book and the Course Content.

What benefit do digital signatures provide and what are their characteristics? In your own words, what does non-repudiation mean? 

The types of message integrity controls and their benefits include the following. A one-way hash provides a fingerprint of a message by taking a variable-length string (the message) and producing a fixed-length value. An HMAC provides data origin authentication and data integrity: a symmetric key is concatenated with the message and hashed to produce a MAC value that is appended to the message and sent to the receiver. CBC-MAC encrypts the message with a symmetric block cipher in CBC mode and uses the output of the final block of ciphertext as the MAC. Hashing algorithms such as MD2, MD4, MD5, SHA, HAVAL, and Tiger generate message digests used to detect whether modification has taken place. A digital signature encrypts the message digest with the sender's private key, which provides integrity along with authentication and non-repudiation.
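As a small illustration of two of these controls (a one-way hash and an HMAC), here is a Python sketch using only the standard library; the message and the shared key are invented for the example.

```python
# One-way hash vs. HMAC: the HMAC mixes a shared symmetric key into the digest,
# so it proves data origin as well as integrity.
import hashlib, hmac

message = b"Transfer $100 to account 12345"
shared_key = b"secret-symmetric-key"     # hypothetical key known to both parties

digest = hashlib.sha256(message).hexdigest()                     # one-way hash
mac = hmac.new(shared_key, message, hashlib.sha256).hexdigest()  # HMAC

print("SHA-256 digest:", digest)
print("HMAC-SHA-256  :", mac)

# The receiver recomputes the HMAC and compares in constant time; a mismatch
# means the message was altered or the sender did not hold the key.
received_mac = hmac.new(shared_key, message, hashlib.sha256).hexdigest()
print(hmac.compare_digest(mac, received_mac))                    # True
```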

The various secure email protocols are:

Privacy-Enhanced Mail (PEM) – an internet standard that provides secure-email over the Internet for in-house communication infrastructure that provides authentication, message integrity, encryption, and key management.

Pretty Good Privacy (PGP) – a freeware email security program that was the first widespread public key encryption program. PGP is a complete cryptosystem that uses cryptographic protection to protect email files.

Multipurpose Internet Mail Extension (MIME) – a technical specification that indicates how multimedia data and email attachments are to be transferred; and a mail standard that dictates how mail is formatted, encapsulated, transmitted, and opened.

Harris, S. (2013). CISSP All-in-One Exam Guide, Sixth Edition. McGraw-Hill/Osborne. [Books24x7 version] Retrieved from http://common.books24x7.com.ezproxy.umuc.edu/toc.aspx?bookid=50527

Week 5 DQ 1

After reading/viewing this week’s materials, please respond to one or more of the following questions.

What are the steps in the business continuity planning process? Why is a clear understanding of a company’s enterprise architecture critical to this process?

Describe the steps in a Business Impact Analysis (BIA).

What different loss criteria types can be associated with threats identified during the Business Impact Analysis process? 

The following are the steps in the business continuity planning process. It is extremely important to have a clear understanding of the company’s enterprise architecture because you have to know what you’re protecting and how it would affect the organization and its stakeholders if those assets identified were damaged or destroyed.

Develop the continuity planning policy statement. Write a policy that provides the guidance necessary to develop a BCP, and that assigns authority to the necessary roles to carry out these tasks (Harris, 2013).

Conduct the business impact analysis (BIA). Identify critical functions and systems and allow the organization to prioritize them based on necessity. Identify vulnerabilities and threats, and calculate risks (Harris, 2013).

Identify preventive controls. Once threats are recognized, identify and implement controls and countermeasures to reduce the organization’s risk level in an economical manner (Harris, 2013).

Develop recovery strategies. Formulate methods to ensure systems and critical functions can be brought online quickly (Harris, 2013).

Develop the contingency plan. Write procedures and guidelines for how the organization can still stay functional in a crippled state (Harris, 2013).

Test the plan and conduct training and exercises. Test the plan to identify deficiencies in the BCP, and conduct training to properly prepare individuals on their expected tasks (Harris, 2013).

Maintain the plan. Put in place steps to ensure the BCP is a living document that is updated regularly (Harris, 2013).

Reference:

Harris, S. (2013). CISSP All-in-One Exam Guide, Sixth Edition. [Books24x7 version] Available from http://common.books24x7.com/toc.aspx?bookid=50527

Week 5 DQ 2

After reading/viewing this week’s materials, please respond to one or more of the following questions.

Describe the differences between the hot, warm, and cold site methods of facility recovery.

Define the full, incremental, and differential backups and describe the differences between these data backup types.

Describe the differences between disk shadowing, electronic vaulting, and remote journaling. What is disk duplexing and how does it differ from disk mirroring? 

Effective data recovery plans must consider hot sites, warm sites, and cold sites. When the capabilities of each site are considered, companies are better able to predict the recovery time following a disaster, and knowing how long it will take until systems are running again is vital. A hot site is considered "proactive": it allows a company to keep servers and a live backup site running in case a disaster occurs. This is unlike a warm or "preventive" site, which enables the pre-installation of a company's hardware and allows the company to preconfigure bandwidth necessities; in a warm site, all a company would have to do is load software and data in order to restore its systems. Cold sites are also referred to as "recovery" sites. These sites include data center space, power, and network connectivity that is available whenever a company may need it; in these facilities, a company's logistical support team would assist in moving hardware into the data center to get the company back up and running. This process may take an extended period of time, unlike a transition to a hot site, where there would be an immediate cutover if disaster were to arise. Hot sites are essential for mission-critical operations (Core X Change, 2014).

References

Core X Change. (2014). Disaster Recovery Hot, Warm and Cold Sites: Key Differences. Colocation & Connectivity by Zayo. Retrieved from https://www.corexchange.com/blog/disaster-recovery-hot-warm-cold-sites-key-differences

Week 6 DQ 2

After reading/viewing this week’s materials, please respond to one or more of the following questions.

What is a View-based access control in database? What is a Data warehouse? What is Online Transaction Processing (OLTP)?

What is Change Management and how is it used to control security breaches? What is Configuration Management and how is it used to control security breaches? What is Patch management and how is it used to control security breaches?

In a database, to control security, lock controls are implemented and tested using the ACID test. Explain the following terms for each letter within the ACID method: Atomicity, Consistency, Isolation, Durability. 

The ACID model consists of atomicity, which divides transactions into units of work and ensures that either all modifications take effect or none do, so the transaction either commits or is rolled back; consistency, which requires a transaction to follow the integrity policy developed for that particular database and ensures all data remain consistent across the different databases; isolation, which means transactions execute in isolation until completed, without interacting with other transactions; and durability, which means that once a transaction is verified as accurate on all systems it is committed and cannot be rolled back.
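A minimal sketch of atomicity using Python's built-in sqlite3 module follows: the debit and credit form one unit of work, and because a failure is simulated partway through, the whole transaction is rolled back. The table and account values are invented for the example.

```python
# Atomicity sketch: either both updates in the transaction commit, or neither does.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)", [("alice", 100), ("bob", 0)])
conn.commit()

try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute("UPDATE accounts SET balance = balance - 50 WHERE name = 'alice'")
        # simulate a crash before the matching credit is applied
        raise RuntimeError("failure between the debit and the credit")
except RuntimeError:
    pass

# Because the unit of work failed, neither modification took effect.
print(dict(conn.execute("SELECT name, balance FROM accounts")))  # {'alice': 100, 'bob': 0}
```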

Week 7 DQ 1

After reading/viewing this week’s materials, please respond to one or more of the following questions.

Describe the administrative management practices of separation of duties, job rotation, and mandatory vacations and their role within operations security.

Describe the differences between the following sanitization methods of media control: clearing, purging, zeroization, and degaussing. What is data remanence? 

The difference between the following sanitization methods of media control are:

Clearing – the process of removing data from media so that the data cannot be readily retrieved using routine operating system commands or data recovery software.

Purging – a method of removing the data on media that makes it unrecoverable even with great effort.

Zeroization – a method of overwriting data on media with a pattern designed to ensure that the data cannot be recovered.

Degaussing – the process of magnetically scrambling the patterns on a tape or disk that represent the data stored on it; media can also be destroyed outright through shredding, crushing, or burning.

Data remanence is the residual physical representation of data that remains on the drive even after the data has been removed or erased.
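As a minimal sketch of zeroization, the Python fragment below overwrites a file with zeros before deleting it; the file name is made up, and on real media (SSD wear leveling, journaling file systems) a single software overwrite may not eliminate all data remanence.

```python
# Zeroization sketch: overwrite every byte of a file with zeros, then delete it.
import os

def zeroize(path: str) -> None:
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(b"\x00" * size)   # overwrite the contents with a zero pattern
        f.flush()
        os.fsync(f.fileno())      # push the overwrite down to the device
    os.remove(path)

# demonstration with a throwaway file
with open("old_report.tmp", "wb") as f:
    f.write(b"sensitive data")
zeroize("old_report.tmp")
```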

Week 7 DQ 2

After reading/viewing this week’s materials, please respond to one or more of the following questions.

Describe the different methods of RAID. What is RAIT?

Define the different types of trusted recovery. What is meant by the term “fail secure”?

Describe three of the following attack types in the Operation Security domain: man-in-the-middle, mail bombing, war-dialing, ping-of-death, teardrop, and slamming-and-cramming 

The different methods of RAID are: RAID 0, which is data striping; RAID 1, which is mirroring; RAID 2, where parity data is created with a Hamming code that identifies any errors; RAID 3, byte-level parity, where data is striped over all the drives and the parity data is held on one drive; RAID 4, where parity is created at the block level; RAID 5, where data and parity are written in disk sector units across all the drives, which is the most widely used level because of its redundancy; RAID 6, which provides additional fault tolerance through a second set of parity data written across all drives; and RAID 10, where data is simultaneously mirrored and striped across several drives and can survive multiple drive failures.

Redundant Array of Independent Tapes (RAIT) is similar to RAID but uses tape drives instead of disk drives. In RAIT, data is striped in parallel to multiple tape drives, with or without a redundant parity drive.
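As a conceptual illustration of how parity-based RAID levels rebuild a failed member (not an actual RAID implementation), here is a small Python sketch using XOR parity; the block contents are invented.

```python
# Parity sketch: the parity block is the XOR of the data blocks, so any single
# missing block equals the XOR of everything that survives.

def xor_blocks(*blocks: bytes) -> bytes:
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

drive1, drive2, drive3 = b"AAAA", b"BBBB", b"CCCC"
parity = xor_blocks(drive1, drive2, drive3)   # stored on a parity drive

# drive2 fails; rebuild its contents from the surviving drives plus parity
rebuilt = xor_blocks(drive1, drive3, parity)
print(rebuilt == drive2)                      # True
```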

Week 8 DQ 1

As the course wraps up this week, please share your reflections on this course, including lessons learned. 

What are your goals moving forward?

Though a very challenging and fast-paced class, I learned quite a bit in each of the CISSP domains. It is easy to see why an exam of this level is contingent upon five years of job experience in at least two of the domains (although you can take the exam without the experience and only achieve Associate of (ISC)2 status) ((ISC)2, 2014).

From the perspective of taking the exam, I will likely take another couple of months to circle back to each domain, take more practice tests, and really focus on topics that need more attention. Though this was an eight-week class, the scope of the CISSP is very large and requires a lot of attention.

I haven’t yet received feedback on my risk assessment paper, but I’ll say it was a challenging yet rewarding assignment. It was great to take the topics we learned in class and apply them directly to a project, which isn’t far from what happens in the real world. If I weren’t a procrastinator, I could have easily doubled or tripled the length of the paper, given the topics I learned in this class that I wanted to apply to GFI, such as writing more detail about a security policy, vulnerability management, etc. I’ll have to leave that for other courses, which I hope to be able to take.

All in all, this was a great class. I would have much preferred not to take it online, and to take it over a 16-week session instead, but there is still a lot I’ve learned that I will be able to apply to my current job to make me a better Information Assurance Auditor.

Good luck to you all in your future studies!

Works Cited

(ISC)2. (2014). How to Get Your CISSP Certification. Retrieved December 14, 2014, from https://www.isc2.org/cissp-how-to-certify.aspx

CMIT-321-Executive-Proposal

Executive Proposal

Assignment #01

Student Registration #

Security Testing Software for ADVANCED RESEARCH

Submitted By:

Date of Submission: OCT. 4, 2014

Executive Summary

Security testing software is a very important asset for many organizations, as it protects a company's network by identifying and testing vulnerabilities before potential hackers can exploit them. Advanced Research has been the victim of cybercriminal efforts to take intellectual assets and sell them to competitors. It is assumed that our network of corporate documents has been infiltrated by unauthorized parties more than once.

The following is a plan for the purchase and implementation of CORE IMPACT Pro for the protection of Advanced Research. The software product, IMPACT Pro, is used internally to test the security of web applications, our network systems, wireless networks, endpoints, and email-based social engineering attacks against end users.

In short, IMPACT Pro addresses penetration by testing the essential components of the infrastructure. The software also identifies vulnerabilities and assesses weak points in an organization's network against today's cybercrime landscape, complementing layered security products. IMPACT Pro also addresses the rising demand for tangible metrics that can be used to measure the progress of security programs and to share those results with business management and IT.

All these factors have motivated the extensive adoption of comprehensive vulnerability management technologies by many companies. These range from advanced scanning and defensive tools, which respond to individual threats and deliver lists of potential weak points, to security management systems that use algorithms to calculate theoretical threats. IMPACT Pro's capability to replicate a wide array of real-world threat situations allows organizations to test their networking and IT assets directly. Once a network has been compromised, any and all devices connected to that network have the potential to be compromised as well (Murphy, 2013). In short, IMPACT Pro will assist Advanced Research in evaluating its most significant compliance-related and security risks.

The Requirement:

The main objective is to prevent cybercriminals from accessing the intellectual property of Advanced Research Corporation that is available through our company's network. Advanced Research is still a fairly young company, and because of this, management has been hesitant to budget for expensive security projects in the past, which has resulted in the loss of millions of dollars in the form of research data stolen from its corporate network by cyber thieves. Given the continued potential for loss, there is a need to purchase the necessary testing software. After much research, I have found CORE IMPACT Pro to be the best solution for Advanced Research. Core Impact is rapidly becoming the standard in vulnerability scanning and penetration testing. This top-of-the-line software features various penetration tests, including remote, host-based, network-based, and web-based penetration tests, as well as Wi-Fi network testing (Stephenson, 2013).

The Proposed Solution:

CORE IMPACT Overview:

Core IMPACT Pro will enable Advanced Research to conduct independent, proactive penetration analysis of our applications, systems, and end users. It will allow us to replicate malware, data theft techniques, and real-world hacking while gaining actionable information that will assist us in finding and fixing our most pressing security weaknesses. After using this product, we can determine how well our awareness policies and defensive security measures detect, react to, and prevent future attacks.

IMPACT pro will enable us to test:

Critical OS facilities and desktop and server operating systems.

IPS, IDS, network security solutions and firewalls.

Endpoint security solutions such as anti-malware, anti-phishing, antivirus, and host-based intrusion prevention systems.

Endpoint applications such as web browsers, instant messaging, email readers, business applications, media players, and productivity tools.

End user awareness of spam, phishing and social engineering.

Core Impact Pro

The Benefits of IMPACT Pro:

The reasons for recommending Impact Pro, in order to satisfy the security requirements of our organization, are mentioned below:

It can be used efficiently by current internal staff:

We can use this product with our existing staff and skill sets. Its training requirements are slight, training is provided by the vendor, and it is easy to implement. It will also allow us to assess our security status as we upgrade, add, and update endpoints, web applications, servers, and other IT assets on a regular basis.

Product stability, quality and security of network:

IMPACT Pro is commercial-grade security testing software. It tests security against malware, hacking, and data theft attempts using the same techniques that cybercriminals apply. However, the software does this without putting the stability of networks, applications, or endpoints at risk, ensuring the network remains secure and safe during testing.

Enabling repeatable and consistent security testing:

IMPACT Pro allows a user to schedule endpoint and network penetration testing and vulnerability validation. This ensures the reliability of testing programs by allowing a more organized approach to assessment, with strong metrics to measure improvement over time.

Testing of web applications and heterogeneous infrastructure:

IMPACT Pro is the only commercial-grade automated web application testing software on the market. It allows the user to verify that both proprietary and commercial web applications are secure against threats such as SQL injection, cross-site scripting, and PHP file inclusion vulnerabilities. It can also help identify weaknesses across the many operating system platforms, service packs, and versions existing on systems throughout the network.

Constant updates for testing against latest attack trends:

A combination of continuing technology adoption, increasing connectivity, persistent cybercriminals, and constant change means that the security landscape is continually evolving. Core Security offers regular updates that check for newly exposed vulnerabilities in services, operating systems, wireless networks, end-user applications, web applications, and other possible points of exposure.

How IMPACT Pro varies from other Vulnerability Scanners:

Core Security conducts ongoing research into malware attacks and cutting-edge vulnerabilities, allowing it to keep IMPACT Pro up to date with new testing capabilities 20 to 30 times per month. Unlike many testing tools that only report potential vulnerabilities for analysis, IMPACT Pro allows an organization to test its present security posture across the majority of its IT infrastructure in order to find real exposures that threaten operations.

CORE's software solutions serve as a self-regulating form of dynamic self-assessment. This allows companies to validate the effectiveness of their various security controls and determine whether or not they are working correctly and protecting the company. Specific capabilities of CORE IMPACT Pro include the ability to test for different types of vulnerabilities across:

Network Systems

Endpoints

Web Applications

Email Users

Wireless Networks

By conducting practical testing across all of the aforementioned access points, we as an organization can isolate failure points in our security structure and report these problems rapidly. Advanced Research will also gain the ability to evaluate the efficiency and ROI (return on investment) of broadly deployed layered security tools such as firewalls, anti-virus, data leakage protection, authentication, IDS/IPS, security management systems, and compliance tools on a continuing basis, to measure the value of these systems and validate past and future spending plans.

Product Cost:

The price of IMPACT Pro starts at $30,000 per year. The product payment includes:

Software License

All upgrades related to version

Systematic Product Updates

Training.

Customer Support

Reference:

F, B. (2013, July 30). Network Penetration Testing and Research. Retrieved September 30, 2014, from http://ntrs.nasa.gov/search.jsp?R=20140002617

Penetration Testing with Core Impact Pro (Attack Intelligence, Vulnerability Prioritization & Consolidation). Retrieved September 30, 2014, from http://www.coresecurity.com/core-impact-pro

Replicate Real-World Attacks and Reveal Critical Security Exposures. (2012, January 1). Retrieved September 30, 2014, from http://www.coresecurity.com/files/attachments/CORE_Pro_product_overview.pdf

Stephenson, P. (2013, February 1). Core Impact Professional. Retrieved September 30, 2014, from http://www.scmagazine.com/core-impact-professional/review/3791/

Cloud Computing: A Case Study

Case Study #4: Cloud Computing

Insert Name

CSIA 412 6381

INTRODUCTION

According to Kundra (2010), the United States government is the world’s largest consumer of information technology, and yet poor project implementation and a lack of upgraded technology have led to decreased efficiency and productivity. Mitigating these issues through upgrades can be costly; however, there is another way to address the issues plaguing the technology departments of the federal government: cloud computing. Cloud computing is a movement within the IT field that allows organizations to increase or decrease capacity, essentially as a shared pool of resources, as needed, without having to update their devices or invest in hardware storage (Kundra, 2011). This case study analyzes ten cloud computing case studies, presents the benefits the organizations experienced, and examines the reasoning behind the benefits identified in one specific study.

CASE STUDY ELEMENTS

Case Study Title – Benefits

DoD US Army AEC (Example) – Assets will be Better Utilized, Efficiency Improvements will Shift Resources Toward Higher-Value Activities (Example)

Department of Defense US Army AEC – Faster application upgrades, increased efficiency, reduced hardware and IT staff costs, increased productivity.

DOD PSDT USAF – Improved asset utilization (increased 70%), improved productivity in application management, focus shift to service management

DOE Cloud Computing Migration – Linked to emerging technologies, purchases only the services needed, can increase capacity at any time, improved productivity in network

HHS Supporting Electronic Health Records – Near-instantaneous increases in capacity, quick response to urgent agency needs, improved productivity with end users

Department of the Interior Email – Aggregated demand and accelerated system consolidation

GSA USA.gov – Purchase and use of needed services from the cloud provider only, near-instantaneous increase in capacity, focus shifted to service management, more responsive to urgent agency needs, improved productivity in network.

NASA World-Wide Telescope – Quick response to urgent agency needs, near-instantaneous increases and reductions in capacity, accelerated system consolidation, improved productivity in application development and network.

NASA Become a Martian – Improved productivity of the network, purchased only the services needed, near-instantaneous increases in capacity, focus shift to service management

SSA SOASK – Increased productivity, service management, tap into private sector innovation, purchase of only services needed, ability to respond quickly to urgent agency needs, aggregated demand

FLRA Case Management System – Tap into private sector innovation, improved productivity in application management, quick response to agency needs, improved asset utilization (time to deploy an app is 25% of original time), better linked to emerging technologies.

CLOSER LOOK

HHS Supporting Electronic Health Records

The Department of Health and Human Services (HHS) has employed cloud computing to support and enhance the implementation of its Electronic Health Records (EHR) system (Kundra, 2010). The benefits HHS has experienced are outlined in the table above; the reasons those benefits were identified are shared in the table below:

Benefit Identified – Reasoning for Identification

Near-instantaneous increases in capacity – The system can be updated as centers begin using it, and future updates are expected to be quick and seamless, providing more storage capacity or decreasing it as necessary.

Quick response to urgent agency needs – The review process conducted to determine how long implementation would take found that over a year would pass before HHS would have access to its new systems; however, Salesforce (the cloud computing vendor) was able to implement its solution in less than three months.

Improved productivity with end users – Implementation of EHR systems is coordinated through this system, streamlining the process for all involved, and Salesforce works collaboratively with the end users to provide the services needed.

CONCLUSION

Cloud computing is a service that can enhance the performance and productivity of any organization when used correctly and accessed through secured access points. The case studies above give various examples of how organizations from different backgrounds can benefit from the same type of system by tailoring it to the needs of their consumers. While HHS consumers benefited from the uniform implementation of EHR systems, customers of the SSA benefited greatly from the cloud-based website created and maintained by the SSA, bypassing long wait times and receiving near-instant answers. Whether cloud computing is being considered for internal purposes only or as a service that will also benefit the individuals an organization serves, it is a system all organizations should take into consideration. With increased efficiency, productivity, and storage, coupled with a decreased need for continuous technological upgrades and additional staffing, cloud computing is a strategy whose benefits exceed its disadvantages when properly implemented.

References

Kundra, V. (2010). State of public sector cloud computing. CIO Council. Retrieved from: https://cio.gov/wp-content/uploads/downloads/2012/09/StateOfCloudComputingReport-FINAL.pdf

Kundra, V. (2011). Federal cloud computing strategy. Washington, DC. Retrieved from: https://learn.umuc.edu/d2l/le/content/47852/viewContent/2363992/View