
How to get java source files from jar file

  • JAR (Java Archive) is a package of compiled .class files (plus any resources and metadata).
  • We can create a jar file using the following commands:
  • jar -cvf  example.jar test.class
  • jar -cvf example.jar *.*
  • To extract the contents of a jar file, use the following command:
  • jar -xvf  instanceofjava.jar



How to get source code from jar file using Java de-compilers



  • Open JD-GUI and go to File -> Open, then select the target jar file.
  • It will show the Java source code.

How to extract java files from jar in eclipse

  • We can also get Java source files from a jar file using Eclipse; for this we need to add a plugin.
  • Download the jar file from http://jd.benow.ca/
  • Unzip it. You will get the jd.ide.eclipse.plugin_1.0.0.jar file; add it to the Eclipse plugins folder.
  • Restart Eclipse.
  • Add the target jar file to a project; now click on a class file and you will see its source code.
  • You can add the target jar file using Java project -> Java Build Path -> Add External JARs.




  • Here the JSTL core jar was added to Eclipse to view its Java source files.
  • Click on a class file and it will show the source code.


Hibernate Native sql query with example

  • By using Hibernate Native SQL we can write database-dependent queries as part of a Hibernate application.
  • Hibernate Native SQL allows us to write create, update, delete and insert queries.
  • We can also call stored procedures using Hibernate Native SQL.
  • When a query is too complex to express in HQL, we can fall back to a native SQL query.
  • Hibernate uses the org.hibernate.SQLQuery interface for native SQL.
    1. SQLQuery is a sub-interface of Query.
    2. Use the createSQLQuery() factory method on Session to create an SQLQuery object.
  • A Hibernate SQLQuery must be associated with an existing Hibernate entity or scalar result.



Hibernate native sql insert query example

  Session session = sessionFactory.openSession();
  session.beginTransaction();

  SQLQuery insertsqlQuery = session.createSQLQuery(
          "INSERT INTO Physician(firstname,lastname,fee,hospital) VALUES(?,?,?,?)");

  insertsqlQuery.setParameter(0, "Saidesh");
  insertsqlQuery.setParameter(1, "Kilaru");
  insertsqlQuery.setParameter(2, 50);
  insertsqlQuery.setParameter(3, "Yashoda");
  insertsqlQuery.executeUpdate();

  // commit the transaction so the insert is persisted
  session.getTransaction().commit();

 Hibernate scalar query example

  • A Hibernate scalar query returns a list of scalar values (individual columns) from one or more tables.
  • Let's see an example of a Hibernate scalar query.


  String hibernate_sql = "SELECT first_name, fee FROM Physician";
  SQLQuery query = session.createSQLQuery(hibernate_sql);
  query.setResultTransformer(Criteria.ALIAS_TO_ENTITY_MAP);
  List results = query.list();

 Hibernate named SQL queries

  • We can write a Hibernate native SQL query that returns entity objects by using the addEntity() method.
  • Let's see an example (see the sketch below).
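The original example is not shown here; the snippet below is a minimal sketch, assuming session is an open Hibernate Session and the Physician class used earlier is a mapped entity.

  // Map each row of the native query result to the Physician entity
  SQLQuery entityQuery = session.createSQLQuery("SELECT * FROM Physician");
  entityQuery.addEntity(Physician.class);
  List<Physician> physicians = entityQuery.list();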



Hibernate Named Query Introduction Tutorial

  • If we use the same HQL or native SQL queries multiple times, the code becomes messy because the queries end up scattered throughout the project.
  • A Hibernate Named Query is a way to reuse a query by giving it a name.
  • Hibernate Named Queries are defined in one place and can be used anywhere in the project.
  • Writing an HQL query in the HBM (mapping) file is called an HQL Named Query.
  • Alternatively, we can use the @NamedQuery annotation in the entity class.
  • So for writing Hibernate Named Queries we use the <query> tag in the Hibernate mapping file, or the @NamedQuery annotation in the entity.
  • If we want to create a Named Query using the Hibernate mapping file, we need to use the query element.



Advantages of Named Query in Hibernate 

  • Global access
  • Easy to maintain.

Hibernate Named Queries by using Annotations:

  • If we want to create Named Queries using annotations in the entity class, we need to use the @NamedQueries and @NamedQuery annotations.
  • @NamedQuery is used to create a single query.
  • @NamedQueries is used to group multiple queries; inside @NamedQueries, every query is declared with its own @NamedQuery annotation.

Hibernate Named Query example by using Annotations:


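The original annotation example is not shown here; the following is a sketch, assuming a hypothetical Doctor entity mapped with JPA annotations (javax.persistence).

  import javax.persistence.Entity;
  import javax.persistence.Id;
  import javax.persistence.NamedQueries;
  import javax.persistence.NamedQuery;

  @Entity
  @NamedQueries({
      @NamedQuery(name = "findDoctorById",
                  query = "from Doctor d where d.id = :id")
  })
  public class Doctor {

      @Id
      private int id;
      private String name;

      // getters and setters omitted
  }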

  Query query = session.getNamedQuery("findDoctorById");
  query.setInteger("id", 37);
  List empList = query.list();


Hibernate Named Queries by using Hibernate mapping file:

  • We can also configure Hibernate Named Queries as part of the Hibernate mapping file.
  • Named queries are written using the <query> element, as shown below.

  <hibernate-mapping>
      <class ... >
      ---------
      </class>

      <query name="findDoctorById">
          <![CDATA[from Doctor s where s.id = :id]]>
      </query>
  </hibernate-mapping>


  Query query = session.getNamedQuery("findDoctorById");
  query.setInteger("id", 64);
  List empList = query.list();

Hibernate Criteria Query Language (HCQL)

  • In Hibernate we can pull data from the database in two ways:
  • session.get() / session.load()
  • Hibernate Query Language (HQL)
  • Now we will discuss a third way, the Hibernate Criteria Query Language, which addresses the limitations of the above two approaches.




Hibernate Criteria Query Language / HCQL

  • In order to fetch records based on some criteria we use the Hibernate Criteria Query Language.
  • Using HCQL we can perform select operations on tables while applying conditions.
  • The Criteria API is an object-oriented alternative to HQL.
  • We can execute only SELECT statements using Criteria; we can't execute UPDATE or DELETE statements with it.

Advantages of  Hibernate Criteria Query Language (HCQL) 

  • The Criteria API allows us to define a criteria query object by applying rules, filters and logical conditions, so that we can add criteria to a query.
  • Criteria is also database independent, because it internally generates HQL queries.
  • Criteria is suitable for executing dynamic queries.
  • The Criteria API also includes Query by Example (QBE) functionality for supplying example objects.
  • Criteria also includes projection and aggregation methods.
     

Criteria Interface:

  • The Criteria interface has methods to specify criteria.
  • The Hibernate Session interface has a method named createCriteria() to create a Criteria object.

  public interface Criteria extends CriteriaSpecification

  Criteria criteria = session.createCriteria(Student.class);
  List<Student> studentList = criteria.list();

Methods of Criteria interface:
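The table of methods is not reproduced here; as a rough summary, the sketch below shows the most commonly used Criteria methods, assuming session is an open Session and Student is a mapped entity as in the earlier example (the marks and name properties are assumptions).

  Criteria criteria = session.createCriteria(Student.class);
  criteria.add(Restrictions.gt("marks", 60));   // add(Criterion): add a restriction (condition)
  criteria.addOrder(Order.asc("name"));         // addOrder(Order): sort the results
  criteria.setFirstResult(0);                   // setFirstResult(int): first row for pagination
  criteria.setMaxResults(10);                   // setMaxResults(int): maximum number of rows
  List students = criteria.list();              // list(): execute and return all matching rows
  // uniqueResult() returns a single result (or null) instead of a list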

  


Order Class

  public class Order extends Object implements Serializable
 
  • By using the Order class we can sort records in ascending or descending order.
  • The Order class provides two methods for ascending and descending sorting:
  public static Order asc(String propertyName)
  public static Order desc(String propertyName)

Hibernate Criteria query example using order class

  Criteria criteria = session.createCriteria(Product.class);

  // To sort records in descending order
  criteria.addOrder(Order.desc("price"));

  // To sort records in ascending order
  criteria.addOrder(Order.asc("price"));

Restrictions Class

  public class Restrictions extends Object

  • The Restrictions class provides methods that can be used to add restrictions (conditions) to a criteria object.
  • The Restrictions class has many methods; some of the commonly used ones are shown below.



Hibernate Criteria query example using Restrictions class


  Criteria criteria = session.createCriteria(Product.class);

  // To get records having price more than 3000
  criteria.add(Restrictions.gt("price", 3000));

  // To get records having price less than 2000
  criteria.add(Restrictions.lt("price", 2000));

  // To get records having productName starting with "zara" (case sensitive)
  criteria.add(Restrictions.like("productName", "zara%"));

  // Case-insensitive form of the above restriction
  criteria.add(Restrictions.ilike("productName", "zara%"));

  // To get records having price between 1000 and 2000
  criteria.add(Restrictions.between("price", 1000, 2000));

  // To check if the given property price is null
  criteria.add(Restrictions.isNull("price"));

  // To check if the given property is not null
  criteria.add(Restrictions.isNotNull("price"));

  // To check if the given property price is empty
  criteria.add(Restrictions.isEmpty("price"));

  // To check if the given property price is not empty
  criteria.add(Restrictions.isNotEmpty("price"));

  List results = criteria.list();

Pagination using Hibernate Criteria

  • By using criteria methods setFirstResult() and setMaxResults() we can achieve pagination concept.


  Criteria criteria = session.createCriteria(Product.class);
  criteria.setMaxResults(10);
  criteria.setFirstResult(20);

Projections class in Hibernate

  public final class Projections extends Object

  • By using the org.hibernate.criterion.Projections class methods we can perform operations like minimum, maximum, average, sum and count.

Hibernate Criteria query example using Projection class

  Criteria criteria = session.createCriteria(Product.class);

  // To get the total row count
  criteria.setProjection(Projections.rowCount());

  // To get the average price
  criteria.setProjection(Projections.avg("price"));

  // To get the distinct count of name
  criteria.setProjection(Projections.countDistinct("name"));

  // To get the maximum price
  criteria.setProjection(Projections.max("price"));

  // To get the minimum price
  criteria.setProjection(Projections.min("price"));

  // To get the sum of price
  criteria.setProjection(Projections.sum("price"));

Hibernate Query Language (HQL)

Hibernate Query Language:

  • HQL is one of the features of Hibernate.
  • HQL is similar to SQL, but it uses class names in place of table names and properties in place of columns.
  • HQL is a database-independent query language.
  • HQL is essentially an object-oriented form of SQL.
  • HQL syntax is very similar to SQL syntax. Hibernate Query Language queries are formed using entities and their properties, whereas SQL queries are formed using tables and their columns.
  • HQL queries are fully object oriented.
  • HQL queries are case sensitive.


Advantages of HQL:

  • Database independent
  • HQL queries support inheritance and polymorphism
  • Easy to learn

Query Interface:

  • The Query interface represents an HQL query in the form of a query object.
  • A Query object is created by calling the createQuery(hql) method of Session, where hql is the query string.

  Session hsession = sf.openSession();
  Query query = hsession.createQuery(hql);  // hql is the HQL query string

  • Execute the query by calling the list() method on the query object.
  • Hibernate returns the results as a List; we can iterate over it and display the output to the client.

  List l = query.list();
  ArrayList emplist = (ArrayList) l;

  • Query is an interface available as part of the org.hibernate package. We cannot create an object of the Query interface directly; we create a reference variable, and it holds an implementation class object.

 Methods of Query Interface:
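The table of methods is not reproduced here; the sketch below shows the most commonly used Query methods, assuming hsession is an open Session and Employee is a hypothetical mapped entity.

  Query query = hsession.createQuery("from Employee e where e.salary > :minSalary");
  query.setParameter("minSalary", 50000);   // setParameter: bind a named parameter
  query.setFirstResult(0);                  // setFirstResult: first row for pagination
  query.setMaxResults(10);                  // setMaxResults: maximum number of rows
  List employees = query.list();            // list(): execute and return all matching rows
  // uniqueResult() returns a single object; executeUpdate() runs bulk HQL update/delete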


Advantages and disadvantages of hibernate compared to jdbc

Advantages of Hibernate over JDBC:

  1. Hibernate is an ORM tool
  2. Hibernate is an open source framework.
  3. Better than plain JDBC for object persistence.
  4. Hibernate has an exception translator, which converts checked exceptions of JDBC into unchecked exceptions of Hibernate. So all exceptions in Hibernate are unchecked, and because of this there is no need to handle them explicitly.
  5. Hibernate supports inheritance and polymorphism.
  6. With Hibernate we can manage data stored across multiple tables by applying relations (associations).
  7. Hibernate has its own query language called Hibernate Query Language (HQL). With HQL, Hibernate becomes database independent.
  8. Hibernate supports relationships like One-To-One, One-To-Many, Many-To-One and Many-To-Many.
  9. Hibernate has a caching mechanism; using it, the number of database hits is reduced, so the performance of an application improves.
  10. Hibernate supports lot of databases.
  11. Hibernate supported databases List.
  12. Hibernate is a lightweight framework because it uses POJO classes for data transfer between the application and the database.
  13. Hibernate has versioning and timestamp features, with which we can know how many times the data was modified.
  14. Hibernate also supports annotations along with XML.
  15. Hibernate supports lazy loading.
  16. Hibernate is easy to learn; it is developer friendly.
  17. The architecture is layered to keep you isolated from having to know the underlying APIs.
  18. Hibernate maintains a database connection pool.
  19. Hibernate has concurrency support.
  20. Hibernate applications are easy to maintain, which increases productivity.


Disadvantages of Hibernate compared to JDBC:
  • Hibernate is slower than plain JDBC because it generates many SQL queries at run time; in practice, though, this is usually not a significant disadvantage.
  • There are some further disadvantages, but they generally do not apply to small applications; they only show up in certain scenarios.


Hibernate supported databases List

  • Hibernate is an open source framework, also called an ORM tool.
  • Hibernate supports a lot of databases.
  • Please find below the list of databases supported by Hibernate.


  1. DB2    
  2. DB2 AS/400   
  3. DB2 OS390 
  4. FrontBase
  5. Firebird  
  6. HypersonicSQL 
  7. H2 Database  
  8. Informix   
  9. Ingres  
  10. Interbase
  11. MySQL5    
  12. MySQL5 with InnoDB    
  13. MySQL with MyISAM    
  14. Mckoi SQL
  15. Microsoft SQL Server 2000    
  16. Microsoft SQL Server 2005  
  17. Microsoft SQL Server 2008  
  18. Oracle
  19. Oracle 9i   
  20. Oracle 10g    
  21. Oracle 11g      
  22. PostgreSQL  
  23. Progress   
  24. Pointbase
  25. SAP DB
  26. Sybase
  27. Sybase Anywhere

  • Each of these databases has a corresponding Hibernate dialect class, which is configured through the hibernate.dialect property.
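For example, a minimal sketch of selecting a dialect programmatically (the dialect classes shown are standard Hibernate dialects; pick the one matching your database):

  import org.hibernate.cfg.Configuration;

  public class DialectConfig {
      public static void main(String[] args) {
          Configuration cfg = new Configuration();
          // Each supported database has a matching dialect class, e.g. for MySQL 5:
          cfg.setProperty("hibernate.dialect", "org.hibernate.dialect.MySQL5Dialect");
          // Oracle 10g:  org.hibernate.dialect.Oracle10gDialect
          // PostgreSQL:  org.hibernate.dialect.PostgreSQLDialect
          // DB2:         org.hibernate.dialect.DB2Dialect
      }
  }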


Big data Hadoop interview questions answers freshers and experienced - Part 2




Hadoop interview questions and answers for freshers and experienced - Part 1


31. Using the Linux command line, how will you copy a file from your local directory to HDFS?

  • hadoop fs -put localfile hdfsfile

32.What platforms and Java versions does Hadoop run on?

  •  Java 1.6.x or higher, preferably from Sun. Linux and Windows are the supported operating systems, but BSD, Mac OS/X, and OpenSolaris are known to work. (Windows requires the installation of Cygwin).



33.Is there an easy way to see the status and health of a cluster?

  • There are web-based interfaces to both the JobTracker (MapReduce master) and NameNode (HDFS master) which display status pages about the state of the entire system. 
  • By default, these are located at http://job.tracker.addr:50030/ and http://name.node.addr:50070/.
  • The JobTracker status page will display the state of all nodes, as well as the job queue and status about all currently running jobs and tasks.
  • The NameNode status page will display the state of all nodes and the amount of free space, and provides the ability to browse the DFS via the web.
  • You can also see some basic HDFS cluster health data by running:
  • $ bin/hadoop dfsadmin -report

34.Do I have to write my job in Java?

  • No. There are several ways to incorporate non-Java code.

35.How do I submit extra content (jars, static files, etc) for my job to use during runtime?

  • The distributed cache feature is used to distribute large read-only files that are needed by map/reduce jobs to the cluster. The framework will copy the necessary files from a URL (either hdfs: or http:) on to the slave node before any tasks for the job are executed on that node.
  • The files are only copied once per job and so should not be modified by the application.
  • Copying content into lib is not recommended and highly discouraged. Changes in that directory will require Hadoop services to be restarted.
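As a rough illustration (a sketch using the classic mapred API; the class name, file path and fragment name are assumptions):

  import java.net.URI;
  import org.apache.hadoop.filecache.DistributedCache;
  import org.apache.hadoop.mapred.JobConf;

  public class CacheSetup {
      public static void main(String[] args) throws Exception {
          JobConf conf = new JobConf(CacheSetup.class);
          // Ship a read-only HDFS file to every task node; tasks read it locally as "lookup"
          DistributedCache.addCacheFile(new URI("/user/hadoop/lookup.dat#lookup"), conf);
      }
  }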
36.How do I change final output file name with the desired name rather than in partitions like part-00000, part-00001?

  • You can subclass the OutputFormat class and write your own. You can look at the code of TextOutputFormat, MultipleOutputFormat, etc. for reference. It might be the case that you only need to make minor changes to one of the existing OutputFormat classes.
  • To do that you can just subclass that class and override the methods you need to change.
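A minimal sketch along those lines, using the classic mapred API's MultipleTextOutputFormat (the class name and key/value types are assumptions for illustration):

  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapred.lib.MultipleTextOutputFormat;

  // Name each output file after the record's key instead of the default part-xxxxx names
  public class NameByKeyOutputFormat extends MultipleTextOutputFormat<Text, Text> {

      @Override
      protected String generateFileNameForKeyValue(Text key, Text value, String name) {
          // "name" is the default part-xxxxx file name being replaced
          return key.toString();
      }
  }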

37.How do you gracefully stop a running job?

  • hadoop job -kill <JOBID>

38.How the HDFS Blocks are replicated?

  • A. HDFS is designed to reliably store very large files across machines in a large cluster. It stores each file as a sequence of blocks; all blocks in a file except the last block are the same size. 
  • The blocks of a file are replicated for fault tolerance. The block size and replication factor are configurable per file. An application can specify the number of replicas of a file. The replication factor can be specified at file creation time and can be changed later. Files in HDFS are write-once and have strictly one writer at any time. 
  • The NameNode makes all decisions regarding replication of blocks. HDFS uses a rack-aware replica placement policy. In the default configuration there are a total of 3 copies of each data block in HDFS: 2 copies are stored on DataNodes on the same rack and the 3rd copy on a different rack.

39.How the Client communicates with HDFS?

  • A. Client communication with HDFS happens using the Hadoop HDFS API. Client applications talk to the NameNode whenever they wish to locate a file, or when they want to add/copy/move/delete a file on HDFS. The NameNode responds to successful requests by returning a list of relevant DataNode servers where the data lives.
  •  Client applications can talk directly to a DataNode, once the NameNode has provided the location of the data.

40.What is HDFS Block size? How is it different from traditional file system block size?
  • In HDFS, data is split into blocks and distributed across multiple nodes in the cluster.
  • Each block is typically 64 MB or 128 MB in size, and each block is replicated multiple times.
  • The default is to replicate each block three times; replicas are stored on different nodes.
  • HDFS utilizes the local file system to store each HDFS block as a separate file.
  • The HDFS block size cannot be directly compared with the traditional file system block size.

41. When are the reducers started in a MapReduce job?
  • In a MapReduce job, reducers do not start executing the reduce method until all map tasks have completed. Reducers start copying intermediate key-value pairs from the mappers as soon as they are available, but the programmer-defined reduce method is called only after all the mappers have finished.

42. If reducers do not start before all mappers finish, then why does the progress on a MapReduce job show something like Map(60%) Reduce(15%)? Why is reducer progress displayed when the mappers are not finished yet?
  • Reducers start copying intermediate key-value pairs from the mappers as soon as they are available.
  • The progress calculation also takes into account the data transfer done by the reduce process, so reduce progress starts showing up as soon as any intermediate key-value pair from a mapper is available to be transferred to a reducer.
  • Although the reducer progress is updated, the programmer-defined reduce method is still called only after all the mappers have finished.

43.What is the Hadoop MapReduce API contract for a key and value Class?
  • The Key must implement the org.apache.hadoop.io.WritableComparable interface.
  • The value must implement the org.apache.hadoop.io.Writable interface.
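For illustration, a hypothetical value class satisfying the Writable contract might look like this (keys would additionally implement WritableComparable and provide a compareTo method):

  import java.io.DataInput;
  import java.io.DataOutput;
  import java.io.IOException;
  import org.apache.hadoop.io.Writable;

  // Hypothetical custom value type implementing the Writable contract
  public class PointWritable implements Writable {

      private int x;
      private int y;

      @Override
      public void write(DataOutput out) throws IOException {
          out.writeInt(x);   // serialize fields in a fixed order
          out.writeInt(y);
      }

      @Override
      public void readFields(DataInput in) throws IOException {
          x = in.readInt();  // deserialize fields in the same order
          y = in.readInt();
      }
  }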

44.What are combiners? When should I use a combiner in my MapReduce Job?
  • Combiners are used to increase the efficiency of a MapReduce program.
  • They are used to aggregate intermediate map output locally on individual mapper nodes. Combiners can help reduce the amount of data that needs to be transferred across to the reducers.
  • You can use your reducer code as a combiner if the operation performed is commutative and associative.
  • The execution of the combiner is not guaranteed: Hadoop may or may not execute it, and if required it may execute it more than once. Therefore your MapReduce jobs should not depend on the combiner's execution. A configuration sketch is shown after this list.
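A configuration sketch using the classic mapred API (WordCountMapper and WordCountReducer are hypothetical classes, assumed to perform a commutative and associative aggregation such as summing counts):

  import org.apache.hadoop.io.IntWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapred.JobConf;

  public class CombinerSetup {
      public static void configure(JobConf conf) {
          conf.setMapperClass(WordCountMapper.class);
          conf.setCombinerClass(WordCountReducer.class); // combiner runs locally on map output
          conf.setReducerClass(WordCountReducer.class);
          conf.setOutputKeyClass(Text.class);
          conf.setOutputValueClass(IntWritable.class);
      }
  }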

45. Where is the Mapper output (intermediate key-value data) stored?
  • A. The mapper output (intermediate data) is stored on the Local file system (NOT HDFS) of each individual mapper nodes. 
  • This is typically a temporary directory location which can be setup in config by the hadoop administrator. The intermediate data is cleaned up after the Hadoop Job completes.

46. Name the most common InputFormats defined in Hadoop. Which one is the default?
 
  • The following are the most common InputFormats defined in Hadoop:
  1. TextInputFormat
  2. KeyValueInputFormat
  3. SequenceFileInputFormat
  • TextInputFormat is the Hadoop default.

47. What is the difference between the TextInputFormat and KeyValueInputFormat classes?
  • TextInputFormat reads lines of text files and provides the byte offset of each line as the key and the actual line as the value to the mapper.
  • KeyValueInputFormat reads text files and parses lines into key, value pairs: everything up to the first tab character is sent as the key to the mapper and the remainder of the line is sent as the value.

48. What is InputSplit in Hadoop

  • When a Hadoop job runs, it splits input files into chunks and assigns each split to a mapper to process. This is called an InputSplit.

49. How is the splitting of files invoked in the Hadoop framework?
  • It is invoked by the Hadoop framework by running the getSplits() method of the InputFormat class (such as FileInputFormat) defined by the user.

50. Consider case scenario: In M/R system,
  • HDFS block size is 64 MB
  • Input format is FileInputFormat
  • We have 3 files of size 64K, 65Mb and 127Mb
  • then how many input splits will be made by Hadoop framework?
  • Hadoop will make 5 splits as follows:
  • 1 split for the 64K file
  • 2 splits for the 65 MB file
  • 2 splits for the 127 MB file

51. What is the purpose of RecordReader in Hadoop
  • The InputSplit defines a slice of work but does not describe how to access it. The RecordReader class actually loads the data from its source and converts it into (key, value) pairs suitable for reading by the Mapper. The RecordReader instance is defined by the InputFormat.

52. After the Map phase finishes, the hadoop framework does
"Partitioning, Shuffle and sort". Explain what happens in this phase?

  • Partitioning is the process of determining which reducer instance will receive which intermediate keys and values. Each mapper must determine for all of its output (key, value) pairs which reducer will receive them. It is necessary that for any key, regardless of which mapper instance generated it, the destination partition is the same
  • Shuffle
  • After the first map tasks have completed, the nodes may still be performing several more map tasks each. But they also begin exchanging the intermediate outputs from the map tasks to where they are required by the reducers. This process of moving map outputs to the reducers is known as shuffling.
  • Sort
  • Each reduce task is responsible for reducing the values associated with several intermediate keys. The set of intermediate keys on a single node is automatically sorted by Hadoop before they are presented to the Reducer

53. If no custom partitioner is defined in the hadoop then how is data partitioned before its sent to the reducer?
  • The default partitioner computes a hash value for the key and assigns the partition based on this result.
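For reference, the default hash partitioner computes the partition roughly like the sketch below (a simplified re-statement, not the library source; the key/value types are assumptions):

  import org.apache.hadoop.io.IntWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapreduce.Partitioner;

  // Simplified version of what the default hash partitioner does
  public class HashStylePartitioner extends Partitioner<Text, IntWritable> {

      @Override
      public int getPartition(Text key, IntWritable value, int numReduceTasks) {
          // Mask the sign bit so the result is non-negative, then mod by the reducer count
          return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
      }
  }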

54. What is a Combiner
  • The Combiner is a "mini-reduce" process which operates only on data generated by a mapper.
  • The Combiner will receive as input all data emitted by the Mapper instances on a given node.
  • The output from the Combiner is then sent to the Reducers, instead of the output from the Mappers.

55. Give an example scenario where a combiner can be used and where it cannot be used
  • There can be several examples; the following are the most common ones.
  • Scenario where you can use combiner
  • Getting list of distinct words in a file
  • Scenario where you cannot use a combiner
  • Calculating mean of a list of numbers

56.What is job tracker
  • Job Tracker is the service within Hadoop that runs Map Reduce jobs on the cluster

57. What are some typical functions of Job Tracker
  • The following are some typical tasks of Job Tracker
  • Accepts jobs from clients
  • It talks to the NameNode to determine the location of the data
  • It locates TaskTracker nodes with available slots at or near the data
  • It submits the work to the chosen Task Tracker nodes and monitors progress of each task by receiving heartbeat signals from Task tracker

58.What is task tracker
  • Task Tracker is a node in the cluster that accepts tasks like Map, Reduce and Shuffle operations - from a JobTracker

59. Whats the relationship between Jobs and Tasks in Hadoop
  • One job is broken down into one or many tasks in Hadoop.

60. Suppose Hadoop spawned 100 tasks for a job and one of the tasks failed. What will Hadoop do?
  • It will restart the task on some other TaskTracker; only if the task fails more than 4 times (the default setting, which can be changed) will it kill the job.

61. Hadoop achieves parallelism by dividing tasks across many nodes, so it is possible for a few slow nodes to rate-limit the rest of the program and slow it down. What mechanism does Hadoop provide to combat this?
  • Speculative Execution
62. How does speculative execution work in Hadoop?
  • The JobTracker makes different TaskTrackers process the same input.
  • When tasks complete, they announce this fact to the Job Tracker. Whichever copy of a task finishes first becomes the definitive copy. If other copies were executing speculatively, Hadoop tells the Task Trackers to abandon the tasks and discard their outputs.
  • The Reducers then receive their inputs from whichever Mapper completed successfully, first.

How to call non static method from static method java

  • A class is a template; when we create an object, the instance variables get memory.
  • If we create two objects, the variables get memory in both objects. So instance variables get memory whenever an object is created.
  • When we declare variables as static, memory is not allocated inside objects, because static means class level: they belong to the class, not the object. We can still access static variables and static methods from objects.
  • In our scenario we are calling a non-static method from a static method in Java.
  • If we are calling a non-static method, we need an object, so that the call is made on the corresponding object's non-static method.
  • Non-static methods are executed (called) using an object, so whenever we want to call a non-static method from a static method we need to create an instance and call the method on it.
  • If we call a non-static method directly from a static method without creating an object, the compiler throws an error.



Program #1: Java example program to call non static method from static method. 


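The original screenshot is not available; the sketch below reconstructs what Program #1 most likely looked like, based on the compiler error quoted in the next list. Note that this code intentionally does not compile.

  package com.instanceofjava.staticinterviewquestions;

  public class StaticMethodDemo {

      void nonStaticMethod(){
          System.out.println("non static method");
      }

      public static void staticMethod(){
          // Compile-time error: Cannot make a static reference to the
          // non-static method nonStaticMethod() from the type StaticMethodDemo
          nonStaticMethod();
      }

      public static void main(String[] args) {
          StaticMethodDemo.staticMethod();
      }

  }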

  • In the above program we are trying to call a non-static method of the class from a static method, so the compiler throws an error:
  • Cannot make a static reference to the non-static method nonStaticMethod() from the type StaticMethodDemo
  • So without an object we cannot call a non-static method of a class.
  • Check the example program below, which calls the non-static method from a static method by creating an object of the class and calling the non-static method on that object.

Program #2: Java example program to call non static method from static method.

  package com.instanceofjava.staticinterviewquestions;
  //www.instanceofjava.com

  public class StaticMethodDemo {

      void nonStaticMethod(){
          System.out.println("non static method");
      }

      public static void staticMethod(){
          new StaticMethodDemo().nonStaticMethod();
      }

      public static void main(String[] args) {
          StaticMethodDemo.staticMethod();
      }

  }
   
Output:

  1. non static method

Calling static method from non static method in java

  • Static means class level and non-static means object level.
  • Non-static variables get memory in each and every object, dynamically.
  • Static variables are not part of an object; all static variables get memory when the class is loaded.
  • Like static variables we have static methods: we can access static methods without creating an object.
  • Static methods are class level, and we can still access static methods inside non-static methods.
  • We can also call static methods without an object, by using the class name.
  • So the answer to the question "is it possible to call static methods from non static methods in java" is yes.
  • Calling a static method from a non-static method means calling a single common method using a unique object of the class, which is possible.


Program #1: Java example program to call static method from non static method.


  package com.instanceofjava.staticinterviewquestions;
  public class StaticMethodDemo {

      void nonStaticMethod(){
          System.out.println("Hi i am non static method");
          staticMethod();
      }

      public static void staticMethod(){
          System.out.println("Hi i am static method");
      }

      public static void main(String[] args) {
          StaticMethodDemo obj = new StaticMethodDemo();
          obj.nonStaticMethod();
      }

  }
 Output:

  1. Hi i am non static method
  2. Hi i am static method

  • In the above program we created an object of the class, called a non-static method on that object, and inside the non-static method called a static method.
  • So it is always possible to access static variables and static methods inside non-static methods.

 Program #2: Java example program to call static method from non static method.
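The original listing for Program #2 is not shown; as a sketch of the same idea, the variant below calls the static method through the class name instead of a plain method call (the class name StaticMethodDemo2 is an assumption).

  package com.instanceofjava.staticinterviewquestions;

  public class StaticMethodDemo2 {

      void nonStaticMethod(){
          System.out.println("Hi i am non static method");
          StaticMethodDemo2.staticMethod();   // explicit class-name call to the static method
      }

      public static void staticMethod(){
          System.out.println("Hi i am static method");
      }

      public static void main(String[] args) {
          new StaticMethodDemo2().nonStaticMethod();
      }

  }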


Top 60 Hadoop interview questions and answers for freshers and experienced - Part 1



1.What is HDFS?

  • HDFS, the Hadoop Distributed File System, is a distributed file system designed to hold very large amounts of data (terabytes or even petabytes), and provide high-throughput access to this information.
  • Files are stored in a redundant fashion across multiple machines to ensure their durability to failure and high availability to very parallel applications


 
2.What are the Hadoop configuration files?

  1.     hdfs-site.xml
  2.     core-site.xml
  3.     mapred-site.xml


3.How NameNode Handles data node failures?

  • The NameNode periodically receives a Heartbeat and a Blockreport from each DataNode in the cluster. Receipt of a Heartbeat implies that the DataNode is functioning properly.
  • When the NameNode notices that it has not received a heartbeat message from a DataNode after a certain amount of time, the DataNode is marked as dead. Since its blocks will then be under-replicated, the NameNode begins replicating the blocks that were stored on the dead DataNode.
  • The NameNode takes responsibility for replicating the data blocks from one DataNode to another. The replication data transfer happens directly between DataNodes and the data never passes through the NameNode.


4.What is MapReduce in Hadoop?

  • Hadoop MapReduce is a specially designed framework for distributed processing of large data sets on clusters of commodity hardware. 
  • The framework itself can take care of scheduling tasks, monitoring them and reassigning of failed tasks.

5.What is the responsibility of NameNode in HDFS ?

  • The NameNode is the master daemon that maintains metadata for the blocks stored on DataNodes. Every DataNode sends a heartbeat and block report to the NameNode.
  • If the NameNode does not receive a heartbeat, it simply marks that DataNode as dead. The NameNode is a single point of failure: if the NameNode goes down, the HDFS cluster is inaccessible.

6.What is the responsibility of the SecondaryNameNode in HDFS?

  • The SecondaryNameNode is a daemon that performs housekeeping work for the NameNode.
  • The SecondaryNameNode is not a backup of the NameNode itself, but it keeps a backup of the NameNode's metadata.

7.What is the DataNode in HDFS?

  • The DataNode is the slave daemon of the NameNode, storing the actual data blocks. Each DataNode stores a number of 64 MB blocks.

8.What is the JobTracker in HDFS?

  • The JobTracker is a master daemon that assigns tasks to TaskTrackers on the DataNodes where the input file's data blocks can be found.

9.How can we list all job running in a cluster?

  • $ hadoop job -list

10.How can we kill a job?

  • $ hadoop job -kill jobid

11. What's the default port that the JobTracker web UI listens on?

  •  http://localhost:50030

12. What's the default port where the HDFS NameNode web UI listens?

  •     http://localhost:50070

13.What is Hadoop Streaming

  • Streaming is a generic API that allows programs written in virtually any language to be used as Hadoop Mapper and Reducer implementations


14. What is the Distributed Cache in Hadoop?

  • Distributed Cache is a facility provided by the Map/Reduce framework to cache files (text, archives, jars and so on) needed by applications during execution of the job.
  • The framework will copy the necessary files to the slave node before any tasks for the job are executed on that node.

15. What is the benefit of the distributed cache? Why can't we just keep the file in HDFS and have the application read it?

  • This is because the distributed cache is much faster: it copies the file to all task trackers at the start of the job.
  • If a task tracker runs 10 or 100 mappers or reducers, they will all use the same local copy of the distributed cache. On the other hand, if the file is read from HDFS inside the MapReduce job, then every mapper tries to access it from HDFS, so if a task tracker runs 100 map tasks it will try to read the file 100 times from HDFS.
  • HDFS is also not very efficient when used in this way.


16. Is it possible to provide multiple inputs to Hadoop? If yes, then how can you give multiple directories as input to a Hadoop job?

  • Yes, The input format class provides methods to add multiple directories as input to a Hadoop job

17.What will a hadoop job do if you try to run it with an output directory that is already present? Will it overwrite it - warn you and continue - throw an exception and exit

  • The hadoop job will throw an exception and exit.


18. How can you set an arbitrary number of mappers to be created for a job in Hadoop?

  • This is a trick question: you cannot set it directly. The number of map tasks is determined by the number of input splits.

19. How can you set an arbitrary number of reducers to be created for a job in Hadoop?

  • You can either do it programmatically by calling the setNumReduceTasks method on the JobConf class, or set it as a configuration setting. A small sketch follows.
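A minimal sketch using the classic mapred API (the reducer count of 10 is an arbitrary example; the equivalent configuration setting is mapred.reduce.tasks):

  import org.apache.hadoop.mapred.JobConf;

  public class ReducerCountSetup {
      public static void configure(JobConf conf) {
          conf.setNumReduceTasks(10);   // request 10 reduce tasks for this job
      }
  }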

20.How will you write a custom partitioner for a Hadoop job

  • To have Hadoop use a custom partitioner you have to do, at minimum, the following three things (a sketch follows this list):
  1. Create a new class that extends the Partitioner class
  2. Override the getPartition method
  3. In the wrapper that runs the MapReduce job, either add the custom partitioner to the job programmatically using the setPartitionerClass method, or add it to the job as a config file (if your wrapper reads from a config file or Oozie)
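A sketch of steps 1 and 2 using the newer mapreduce API (the key/value types and the routing rule are assumptions for illustration):

  import org.apache.hadoop.io.IntWritable;
  import org.apache.hadoop.io.Text;
  import org.apache.hadoop.mapreduce.Partitioner;

  // Route keys starting with A-M to reducer 0 and all other keys to the last reducer
  public class AlphabetPartitioner extends Partitioner<Text, IntWritable> {

      @Override
      public int getPartition(Text key, IntWritable value, int numReduceTasks) {
          if (numReduceTasks == 0 || key.toString().isEmpty()) {
              return 0;
          }
          char first = Character.toUpperCase(key.toString().charAt(0));
          return (first >= 'A' && first <= 'M') ? 0 : numReduceTasks - 1;
      }
  }

For step 3, the class would be registered on the job, for example with job.setPartitionerClass(AlphabetPartitioner.class).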

21.How did you debug your Hadoop code?

  • There can be several ways of doing this but most common ways are
  1.     By using counters
  2.     The web interface provided by Hadoop framework

22.What does the term "Replication factor" mean

  • Replication factor is the number of times a file needs to be replicated in HDFS


23.What is the default replication factor in HDFS

  • The default replication factor is 3

24. What is the typical block size of an HDFS block

  • The default HDFS block size is 64 MB or 128 MB.

25.What is the benefit of having such big block size (when compared to block size of linux file system like ext)

  • It allows HDFS to decrease the amount of metadata storage required per file (the list of blocks per file is smaller as the size of individual blocks increases). Furthermore, it allows for fast streaming reads of data, by keeping large amounts of data laid out sequentially on the disk.

26.Why is it recommended to have few very large files instead of a lot of small files in HDFS

  • This is because the NameNode holds the metadata of each and every file in HDFS, and more files means more metadata. Since the NameNode loads all the metadata in memory for speed, having a lot of files may make the metadata information big enough to exceed the size of the memory on the NameNode.

27.What alternate way does HDFS provides to recover data in case a Namenode, without backup, fails and cannot be recovered

  • There is no way. If Namenode dies and there is no backup then there is no way to recover data

28.Describe how a HDFS client will read a file in HDFS, like will it talk to data node or namenode ... how will data flow etc

  • To open a file, a client contacts the Name Node and retrieves a list of locations for the blocks that comprise the file.
  • These locations identify the Data Nodes which hold each block. Clients then read file data directly from the Data Node servers, possibly in parallel.
  • The Name Node is not directly involved in this bulk data transfer, keeping its overhead to a minimum.

29. Using the Linux command line, how will you list the files in an HDFS directory?

  •      hadoop fs -ls

30. Using the Linux command line, how will you create a directory in HDFS?

  •     hadoop fs -mkdir

Big data Hadoop interview questions answers freshers and experienced - Part 2  

How to read a file in java with example program

  • We can read a text file in Java using the BufferedReader class.
  • We need to import java.io.BufferedReader in order to read a text file.
  • Create an object of the java.io.BufferedReader class by passing a new FileReader("C:\\Sample.txt") object to the constructor.
  • In order to read line by line, call the readLine() method of the BufferedReader class, which returns the current line (initially the first line of the file) and returns null at the end of the file.



 Program #1: Write a java program to read a file line by line using BufferedReader class

  package com.instanceofjava.javareadfile;

  import java.io.BufferedReader;
  import java.io.FileReader;
  import java.io.IOException;

  public class ReadFileBufferedReader {

      /**
       * @Website: www.instanceofjava.com
       * @category: How to read java read file line by line using buffered reader
       */

      public static void main(String[] args) {

          BufferedReader breader = null;

          try {

              String CurrentLine;

              breader = new BufferedReader(new FileReader("E://Sample.txt"));

              while ((CurrentLine = breader.readLine()) != null) {
                  System.out.println(CurrentLine);
              }

          } catch (IOException e) {
              e.printStackTrace();
          } finally {
              try {
                  if (breader != null)
                      breader.close();
              } catch (IOException ex) {
                  ex.printStackTrace();
              }
          }

      }

  }

 Output:
 
  1. Java open and read file
  2. Read file line by line in java
  3. Example java program to read a file line by line
  4. java read file line by line example
  
Program #2: Write a java program to read a file line by line using BufferedReader class using Eclipse
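The original Eclipse screenshot is not available; as a substitute, here is a sketch of the same program written with try-with-resources (Java 7+), which closes the reader automatically and removes the explicit finally block. The file path is an assumption.

  package com.instanceofjava.javareadfile;

  import java.io.BufferedReader;
  import java.io.FileReader;
  import java.io.IOException;

  public class ReadFileTryWithResources {

      public static void main(String[] args) {

          // The reader is closed automatically when the try block exits
          try (BufferedReader breader = new BufferedReader(new FileReader("E://Sample.txt"))) {

              String currentLine;
              while ((currentLine = breader.readLine()) != null) {
                  System.out.println(currentLine);
              }

          } catch (IOException e) {
              e.printStackTrace();
          }
      }
  }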


Can an abstract class have a constructor in Java

  • Yes, we can define a constructor in an abstract class in Java.

  • The next question that comes up is: if we cannot create an object of an abstract class, why define a constructor for it?
  • It is not possible to create an object of an abstract class directly, but we can create an object of a subclass that extends the abstract class.
  • When we define an abstract class, some class must extend it; only then is the abstract class useful.
  • When we create an object of the class that extends the abstract class, the subclass constructor is called, from it the abstract class constructor is called, and memory is allocated for all non-static members.
  • If we do not define any constructor, the default constructor is used.
  • So we can define any number of constructors in an abstract class.
  • It is recommended to define the constructor as protected, because the only scenario in which the object is created is from a subclass.

Order of  execution of  constructor in Abstract class and  its sub class.

  • When we create an object of a class which extends an abstract class, the abstract class constructor is called through the subclass constructor.
  • Let's see a Java example program on abstract class constructors.

Program #1: Does an abstract class have a constructor?

  package com.instanceofjava.abstractclassconstructor;
  public abstract class AbstractDemo {

      AbstractDemo(){
          System.out.println("No argument constructor of abstract class");
      }

  }


  package com.instanceofjava.abstractclassconstructor;
  public class Test extends AbstractDemo{

      Test(){
          System.out.println("Test class constructor");
      }

      public static void main(String[] args) {
          Test obj = new Test();
      }

  }


Output: 

  1. No argument constructor of abstract class
  2. Test class constructor

Can we define parameterized constructor in abstract class?

  • Yes, we can define a parameterized constructor in an abstract class.
  • But we need to make sure that the class extending the abstract class has a constructor, and that it calls the super class parameterized constructor.
  • We can call the super class parameterized constructor from the subclass using a super() call.
  • For example: super(2);
  • What happens if we do not place the super call in the subclass constructor?
  • A compile-time error occurs.

Program #2: Can we define parameterized constructor in abstract class in java?

  package com.instanceofjava.abstractclassconstructor;
  public abstract class AbstractDemo {

      AbstractDemo(int x){
          System.out.println("Parameterized constructor of abstract class x="+x);
      }

  }


  package com.instanceofjava.abstractclassconstructor;
  public class Test extends AbstractDemo{

      Test(){
          super(10);
          System.out.println("Test class constructor");
      }

      public static void main(String[] args) {
          Test obj = new Test();
      }

  }


Output:
 
  1. Parameterized constructor of abstract class x=10
  2. Test class constructor

Program #3: What will happen if we are not placing super call in sub class constructor?
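The original screenshot is not available; the sketch below shows the situation, with the super(10) call removed from the subclass constructor. This code does not compile; Eclipse reports an error similar to "Implicit super constructor AbstractDemo() is undefined. Must explicitly invoke another constructor".

  package com.instanceofjava.abstractclassconstructor;

  public class Test extends AbstractDemo{

      Test(){
          // Missing super(10) call here: since AbstractDemo has no no-argument
          // constructor, the implicit super() call fails and compilation breaks
          System.out.println("Test class constructor");
      }

      public static void main(String[] args) {
          Test obj = new Test();
      }

  }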

